@ckeen @tomasino

By using, let's say, a more advanced and modern shell (one that has arrays implemented), it would be even faster.


@rodolpho you bet. Which email address would you like me to use?

@tomasino thanks a lot! me at rodolpho dot dev is fine, thx again!


Gives a whole new meaning to the phrase: "Go away, or I'll replace you with a very small shell script"...

@tomasino I don’t understand Markov chains very well. I think I do... But I don’t. How hard would it be to add a memory to this to occasionally check against? Like if Alice says “hello goomba”, then the markov runs as usual, but also scores “goomba” higher in the fortune draw for next time?

@Shufei no idea. I'm not deep in the chain world at all, but there are some really good people on the fediverse who know a lot. Maybe some of them will watch the hashtag and help out

@tomasino @Shufei it depends. In my own implementations I basically have a big table of all the pairs (or triples) of preceding elements (characters, words). To pick the next element at the current position, I look at all the rows starting with the previous two or three elements and choose the new element from that filtered list. That’s why the Markov chain can never really generate new text. [cont]
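The pair-table scheme described above can be sketched in a shell with associative arrays (bash 4+). This is a minimal illustration with hypothetical names, not anyone's actual script from the thread:

```shell
#!/usr/bin/env bash
# Minimal sketch of a pair-based Markov table (hypothetical, bash 4+).
# Keys are "word1 word2"; values are space-separated observed completions.

declare -A table

train() {
  # Record each word as a completion of the two words before it.
  local prev1="" prev2="" w
  for w in $1; do
    if [[ -n $prev1 && -n $prev2 ]]; then
      table["$prev1 $prev2"]+=" $w"
    fi
    prev1=$prev2
    prev2=$w
  done
}

next_word() {
  # Filter to rows starting with the previous two words, pick one at random.
  local -a opts=(${table["$1 $2"]})
  (( ${#opts[@]} )) && echo "${opts[RANDOM % ${#opts[@]}]}"
}

train "the cat sat on the mat"
next_word cat sat   # only one completion in this tiny corpus: "on"
```

Since every completion was observed in the training text, the generator can only recombine existing fragments, which is why it never truly produces new text.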

@tomasino @Shufei But increasing weights would be feasible by temporarily raising the weight or occurrence count of the rows picked. If you do that, however, you need a way to decay it as well, or else I suspect the model will degenerate into a generator that keeps repeating itself, going in circles. Perhaps better: given knowledge of a topic outside the model, e.g. “the topic is goomba and mario”, all rows containing these words are temporarily boosted? Sounds more workable!
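One way the boost-plus-decay idea could look, continuing the same hypothetical bash sketch (the scheme and numbers here are assumptions, not anything from the thread):

```shell
#!/usr/bin/env bash
# Hypothetical topic boosting with decay for a weighted Markov table (bash 4+).
# Keys are "word1 word2 completion"; values are integer weights, baseline 1.

declare -A weight

boost() {
  # Temporarily raise every row whose completion matches the topic word.
  local topic=$1 k
  for k in "${!weight[@]}"; do
    [[ $k == *" $topic" ]] && weight[$k]=$(( weight[$k] + 3 ))
  done
}

decay() {
  # Pull boosted weights back toward the baseline so the model
  # doesn't go in circles repeating the boosted rows forever.
  local k
  for k in "${!weight[@]}"; do
    (( weight[$k] > 1 )) && weight[$k]=$(( weight[$k] - 1 ))
  done
}

weight["hello there goomba"]=1
weight["hello there world"]=1
boost goomba   # "goomba" rows now outweigh the others
decay          # each decay step moves them back toward 1
```

A weighted random draw over these counts would then favor on-topic completions right after a boost, with the bias fading as decay runs.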

Mastodon @ SDF
