Valhalla Legends Forums Archive | Battle.net Bot Development | An AI bot that pretends to be a real person

MrMachineCode
Edit: I have added whitespace to make this easier to read. I realize now that the post was like one run-on thought, so it was hard to find good places to put the breaks.

I have already written a program that can carry on a conversation (like the Eliza program), but it's different from Eliza in that it actually learns and remembers what is said to it. I later found out that what I had come up with has actually been around for years; I had reinvented something known as the Markov model. No matter. Basically, the way it works is that it analyzes everything it "hears" and uses that to expand its database of known words and the possible orders those words can appear in.

To generate a reply, it takes part of someone else's sentence (one or two words) to use as a seed, and it then uses its database to "grow" more words onto the sentence, kind of like how crystals grow off of a smaller crystal. In this way it combines the last thing that someone said with other things it's heard in the past, to produce a comment that (hopefully!!) has some sort of relevance. In any case, it usually comes up with some hilarious replies.
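The seed-and-grow idea above can be sketched as a tiny word-bigram Markov chain. This is only an illustration of the technique described, not MrMachineCode's actual code; the class and function names are made up here.

```cpp
#include <map>
#include <vector>
#include <string>
#include <sstream>
#include <cstdlib>
#include <cassert>

class MarkovChain {
    // For each word, every word ever seen immediately after it.
    std::map<std::string, std::vector<std::string>> successors;
public:
    // Learn word-order pairs from one sentence it "hears".
    void learn(const std::string& sentence) {
        std::istringstream in(sentence);
        std::string prev, word;
        while (in >> word) {
            if (!prev.empty()) successors[prev].push_back(word);
            prev = word;
        }
    }
    // Grow a reply outward from a seed word by repeatedly picking
    // a word that has been seen following the current one.
    std::string reply(const std::string& seed, size_t maxWords = 10) {
        std::string out = seed, cur = seed;
        for (size_t i = 1; i < maxWords; ++i) {
            auto it = successors.find(cur);
            if (it == successors.end() || it->second.empty()) break;
            cur = it->second[std::rand() % it->second.size()];
            out += " " + cur;
        }
        return out;
    }
};
```

Feeding it sentences from several speakers is what makes the replies blend the seed with things it has heard before.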

I figured it would be fun to take that program (it's all ANSI C++, and fairly modular) and use some chewing gum and baling wire to tack it on top of some bnet bot code. (Right now the program isn't internet-aware; it only talks to the person at the keyboard.) Given the general level of b.s. in public chat, it could probably hang out in a channel for quite some time before you realized that its answers made any less sense than most of the things the real people there say.

Now, a friend of mine has already programmed this bot in mIRC script, and it's fun as hell to play with, but what we found out working together on it is that it's tricky to get the right balance of reply frequency so that people don't get annoyed. For instance, the bot did fine when there were 5 ppl in the mIRC chat room, but soon there were more, and when there were 20 ppl talking and the bot responded instantaneously to each person, it quickly got kicked for "flooding", although technically it was only trying to keep up...

Well, more than that, I guess it's just that you see so many spam bots, and bots that annoy people, that I want to take extra care to make my AI program "polite". I'm sure I'm not the first person to think of putting an Eliza-class program on bnet, so why do I never seem to see them?

Anyhow, rather than annoy you with technical programming questions when I can RTFM, I'd rather ask: what's your opinion on what should be done to make a bot that's funny but not over the top or hogging the channel?
September 18, 2003, 6:39 AM
Camel
Paragraphs, please!

[edit] This also has nothing to do with battle.net botdev
September 18, 2003, 6:51 AM
MrMachineCode
[quote author=Camel link=board=17;threadid=2739;start=0#msg21568 date=1063867895]
Paragraphs, please!

[edit] This also has nothing to do with battle.net botdev
[/quote]

@ Paragraphs: point taken. I beg forgiveness; I've been corrupted by too many postings to forums that automatically strip all your whitespace and paragraph breaks, to the point that I stopped using them.

@ Has nothing to do with botdev: In what way does this not have anything to do with development?
September 18, 2003, 7:51 AM
Camel
This is more of a general programming question: It is not something unique to a battle.net bot.
September 18, 2003, 8:02 AM
DaRk-FeAnOr
It kinda does have to do with bot dev, because this is an idea for a bot that he is asking for opinions on. It would be nice to have a bot that works like that and helps people with their questions, but the person asking wouldn't be able to tell whether it's a bot or not.
September 18, 2003, 3:14 PM
iago
Actually, writing something like that was supposed to be one of my summer projects, but I never got past writing a bot that can connect.

In my opinion, it would be best if it just targeted a single specific person. Like, if there were 10 people talking, you would set it to reply to "iago" only. Also, a little delay would be nice (2 or 3 seconds) so it doesn't seem quite so obvious.

If you want to discuss any of it, feel free to IM me, my info is at the left <--


Finally, this IS botdev related, and Camel's opinion doesn't count for anything, so don't worry about him :-)

Edit: You can edit your own posts to make them more readable *cough*
And which forums strip whitespace? That's just dumb!
September 18, 2003, 4:24 PM
Camel
[quote author=iago link=board=17;threadid=2739;start=0#msg21577 date=1063902256]this IS botdev related[/quote]
This is true. If only the forum was named "General Bot Development," it would be the perfect thread.
September 18, 2003, 5:15 PM
K
This is something I have done, although I did not write the "AI" myself -- I wrote a plugin to my chat bot using some of the resources available from the Alice Project: http://www.alicebot.org/

One pointer, if you want to run something like this on battle net:

don't let *everyone* trigger the bot, and don't let it be triggered by everything they say -- you'll flood off in the first case, and annoy people in the second. For example, AliceOnline responds only when a person is in her user list (or when loose mode is set), and only then when what they say contains "alice." This keeps the amount of random drivel that she says down to a minimum, and keeps the channel (OTS) happy.

Good luck! And make sure you implement whisper responses -- it's always fun to see how long it takes people to realize that alice is not a girl and not going to cyber with you.
September 18, 2003, 5:34 PM
Spht
[quote author=Camel link=board=17;threadid=2739;start=0#msg21590 date=1063905336]
[quote author=iago link=board=17;threadid=2739;start=0#msg21577 date=1063902256]this IS botdev related[/quote]
This is true. If only the forum was named "General Bot Development," it would be the perfect thread.
[/quote]

Although he never specified what the bot was for, he probably posted here because the bot is for Battle.net, which is fine. If it's later specified that it's not for Battle.net, then someone may move this thread.
September 18, 2003, 5:56 PM
iago
Only responding to people who say "alice" is a bad idea, because half the fun is them not realizing it's an AI :)
September 18, 2003, 9:48 PM
MrMachineCode
OK, I can see that. You're right; although this *will* be a Battle.net bot, the question was a general one I ought to have posted to a general bot development board, not Battle.net bot development. I don't know if the moderators move threads on this board, or if I should post a continuation there.

Picking out one person to talk to at a time -- that's a good idea, one I hadn't thought of. I can add some randomization so that it will occasionally decide to switch topics, but not try to comment on everything.

The thing about responding to its own name made me laugh, because it's similar to a problem the mIRC version had. The bot talks about whatever you say to it, so whenever someone asked it if it was a bot, it would begin to spout, in a warped manner, mixed-up combinations of other sentences it knew about bots, and thus reveal itself!

Myndfyre: thank you for your email about my post, unfortunately I was unable to reply to it because the return address on the email you sent me is not a real email address.
September 18, 2003, 10:32 PM
K
[quote author=iago link=board=17;threadid=2739;start=0#msg21610 date=1063921717]
Only responding to people who say "alice" is a bad idea, because half the fun is them not realizing it's an ai :)
[/quote]

Well, that's only partially true -- it actually responds to "ali" -- not a combination used very often, but I realize it comes up in some words. It also always responds (via whisper) if you whisper it -- which is generally what people wanting to cyber do. I wish I could find some good logs.....
September 18, 2003, 10:51 PM
iago
There isn't a general bot-development board; it's ok that it's here.
September 18, 2003, 10:57 PM
Kp
Have it pick a random number in some moderately small range, say [3, 8). After that many messages, respond to something and roll a new random number. Alternatively, instead of using that number to control how often it talks, use it to control how often the bot switches the people to whom it listens. Also, if you have it restricted to a certain subset of people, you may want to have it "get bored" if those people stay quiet too long; otherwise, it might go silent if everyone it is watching stops talking for a while. How you determine bot boredom is a matter of preference, of course.
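Kp's counting scheme is simple to sketch: draw a count from [3, 8), stay quiet for that many incoming lines, speak on the last one, then re-roll. The class name and constants here are illustrative.

```cpp
#include <cstdlib>
#include <cassert>

class TalkGate {
    int remaining;
    int roll() { return 3 + std::rand() % 5; }   // uniform over [3, 8)
public:
    TalkGate() : remaining(roll()) {}
    // Call once per incoming chat line; returns true when the bot should reply.
    bool shouldSpeak() {
        if (--remaining > 0) return false;
        remaining = roll();   // reset for the next quiet stretch
        return true;
    }
};
```

The same gate could just as easily trigger a switch of which person the bot is listening to, as suggested above.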
September 19, 2003, 2:48 AM
Adron
[quote author=Kp link=board=17;threadid=2739;start=0#msg21654 date=1063939710]
How you determine bot boredom is a matter of preference, of course.
[/quote]

"Boredom" and other such factors could be displayed in some kind of pretty interface to those who are interested in how the simulation is working. You could simulate boredom, hunger, mood, sleepiness and many other factors...
September 19, 2003, 12:24 PM
MrMachineCode
That's a neat idea: give it pseudo-emotions. I like to keep the basic algorithm as straightforward and uncomplicated as possible, so if I do that, it would be a separate module from the "logical" part of its "brain". An add-on, like Data's emotion chip.

One thing about the algorithm is that the longer the input phrase, the more interesting its replies are. It tends to respond in kind to what people say to it, so if you only say one word you'll tend to get one word back, but a full sentence usually triggers a different full sentence back. I think I may make it so that the more words there are in someone's statement, the more likely it is to try to generate a reply off it. I'd have to put in safeguards to keep it from responding to spam. One possibility is that if someone repeats the same long phrase more than once (like when someone's spamming something), the bot could automatically quit "seeing" it.

I could also make it compare what it's about to say to everything that has been said before, and have it reject that response if it's just repeating itself. This would both prevent people from "teaching" the bot to spam, and make the replies more unique.
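The repetition check described above amounts to keeping a set of every line already seen and vetoing any candidate reply that matches one. A minimal sketch (names are illustrative):

```cpp
#include <set>
#include <string>
#include <cassert>

class RepetitionGuard {
    std::set<std::string> seen;
public:
    // Remember a line the bot has heard (or said).
    void record(const std::string& line) { seen.insert(line); }
    // True if the candidate reply is fresh; it is also recorded,
    // so the bot will never say the exact same thing twice.
    bool approve(const std::string& candidate) {
        return seen.insert(candidate).second;
    }
};
```

A real version would probably normalize case and whitespace before comparing, so trivially altered spam doesn't slip through.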

After many hours of thought, I realize that a Markov model AI can actually be trained in two different "directions". What I mean is, the way I programmed the AI right now, it looks at sentences "horizontally", left to right. It looks at one person's sentence and learns that one word usually comes after some other word. This doesn't work so well when people in a chat room are only using one or two words; then there's just not that much variation.

But there is another way to use a Markov model! You can also train the model vertically in the chat room! What I mean is, take something like this:

Bob: afk, brb
Tom: ok
....
Bob: back
Tom: welcome back

By training the bot vertically, i.e. from statement to statement instead of horizontally within one statement, the bot will watch this exchange and associate these statements with one another; it will learn to say "welcome back" when someone says "back"!!
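The vertical model is the same bigram trick, just applied to whole chat lines instead of words within a line: each utterance learns which utterances have followed it. A sketch of the idea (names made up here):

```cpp
#include <map>
#include <vector>
#include <string>
#include <cstdlib>
#include <cassert>

class VerticalModel {
    // For each chat line, every line anyone has said right after it.
    std::map<std::string, std::vector<std::string>> followups;
    std::string lastLine;
public:
    // Feed every chat line through here in the order it appears.
    void observe(const std::string& line) {
        if (!lastLine.empty()) followups[lastLine].push_back(line);
        lastLine = line;
    }
    // Suggest a reply to a line, or "" if nothing was ever said after it.
    std::string suggest(const std::string& line) const {
        auto it = followups.find(line);
        if (it == followups.end()) return "";
        return it->second[std::rand() % it->second.size()];
    }
};
```

Trained on the Bob/Tom exchange above, `suggest("back")` would return "welcome back".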
September 20, 2003, 12:06 AM
EvilCheese
On the subject of bot AI:

If you were planning to implement a bot that simulated emotions, it would probably be better to implement it as a state-based AI with a fuzzy-logic element, similar to the awareness AI for the bad guys in a game like Thief.

Basically, predefine a set of possible moods, which could either be all discrete, or combinable (up to you).

Also define a set of stimuli which can affect those states; for example, the bot may head towards "jovial" when people use the word "lol" a lot, finally reaching jovial at some point affected by a pseudo-random factor.

The other approach would be to have all moods present at all times and arranged in what I would call a "flower petal" arrangement. Take this picture as an example:

[img]http://www.ninjazone.net/AIPetal.gif[/img]

The cross represents the current "mood" of the AI, and affects to some degree the choice of responses, response structure and response frequency.

Each defined stimulus would act as a vector, moving the current mood by a specific factor in one or more directions, allowing a combination of moods.

Obviously you could include as many petals as you liked in your arrangement, but remember to keep opposing emotional states on opposing petals, else you might end up with your AI being happy and sad at the same time :P
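The petal picture boils down to the mood being a point in a plane and each stimulus being a vector nudging it. A minimal two-axis sketch, assuming illustrative axis names and a decay-toward-neutral factor that EvilCheese didn't specify:

```cpp
#include <cassert>

struct MoodPoint {
    // x: sad(-) .. happy(+), y: calm(-) .. angry(+)  -- illustrative axes.
    double x = 0, y = 0;
    // Apply a stimulus vector, then drift slightly back toward neutral,
    // so moods fade unless the stimuli keep coming.
    void stimulate(double dx, double dy) {
        x = (x + dx) * 0.9;
        y = (y + dy) * 0.9;
    }
    bool isHappy() const { return x > 0.5; }
    bool isAngry() const { return y > 0.5; }
};
```

Putting opposing emotions at opposite ends of an axis is exactly what makes their stimuli cancel, which is the constraint described above.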
September 20, 2003, 1:03 AM
Adron
[quote author=EvilCheese link=board=17;threadid=2739;start=15#msg21740 date=1064019832]
Obviously you could include as many petals as you liked into your arrangement, but remember to keep opposing emotional states on opposing petals, else you might end up with your AI being happy and sad at the same time :P
[/quote]

Well, being happy and sad at the same time is possible. It's just not common, and it has to really attain both states at once. If it alternates between happiness and sadness it'll seem like a psycho :P

In general though, the bot having feelings that control parameters of the response process would probably make it much more realistic. When people go on talking about things that don't interest the bot, it goes silent, and when people come into a subject it's enthusiastic about it'll talk much more.

When people piss it off it'll get angry or perhaps pout, that is, unless it's mature enough to just ignore the insults. Or, the person might get banned, or someone else might say something to make it happy or bring up some interesting topic.

It could also get less likely to talk to people it doesn't like. It might dislike people trying to get it to spam and turn to just giving them an insult when it sees them and then keeping its mouth shut.
September 20, 2003, 10:46 AM
Myndfyr
[quote author=Adron link=board=17;threadid=2739;start=15#msg21777 date=1064054785]
Well, being happy and sad at the same time is possible. It's just not common, and it has to really attain both states at once. If it alternates between happiness and sadness it'll seem like a psycho :P
[/quote]

Okay, now this thread is getting philosophical.
September 21, 2003, 6:56 PM
iago
No, it's getting into the trickier questions about AI. We discussed these things in AI class, and they're interesting problems.
September 21, 2003, 11:11 PM
Adron
[quote author=Myndfyre link=board=17;threadid=2739;start=15#msg21886 date=1064170567]
Okay, now this thread is getting philosophical.
[/quote]

It's not my intention to make it philosophical; I try to approach this scientifically. The same goes for my approach to the soul question -- I try to give a scientific comment/answer, not something based on knowing nothing about anything.
September 21, 2003, 11:18 PM
Arta
This is a fantastic idea! I love it. I hope you complete and release it with code at some point, I'd be very interested in looking it over :). Seriously, this is the best idea to arise from botdev for ages and Camel should just stfu (as usual).
September 24, 2003, 4:32 AM
iago
[quote author=Arta[vL] link=board=17;threadid=2739;start=15#msg22069 date=1064377953]
This is a fantastic idea! I love it. I hope you complete and release it with code at some point, I'd be very interested in looking it over :). Seriously, this is the best idea to arise from botdev for ages and Camel should just stfu (as usual).
[/quote]

My friend is working on that for his final project in AI class; hopefully he'll have something useful by the end of it :)
September 24, 2003, 5:37 AM
MrMachineCode
[quote author=Adron link=board=17;threadid=2739;start=15#msg21777 date=1064054785]
Well, being happy and sad at the same time is possible. It's just not common, and it has to really attain both states at once. If it alternates between happiness and sadness it'll seem like a psycho :P
[/quote]

Hey now! I happen to be bipolar (manic depressive), so I really do alternate between happy and sad sometimes! Are you calling me a psycho? lmao. No, I'm not really offended by that, I just think it's kind of funny.

My understanding of what science knows so far about how human emotions work is that there are certain chemicals in the brain, and the levels of these chemicals influence how we're feeling emotionally, by exciting or inhibiting different brain cells. So for an AI, representing its emotional state by a combination of *vectors* is quite plausible, as those vectors could correspond to the *levels* of chemical messengers in a real brain.

I'm not certain, however, that different emotions ought to be on opposite ends of an axis. Rather, I think it makes more sense to have each emotion on its own vector, so you could have the AI be, for instance, happy and sad at the same time, or any other combination. Each emotion could *affect* more than one attribute of its behavior, and some emotions would have mutually opposite effects on some but not all of the attributes. That way, the AI could have more than one emotion at once, but some emotions could neutralize each other.

Unfortunately, the discussion of putting emotional states into my AI is, for me at least, just mental masturbation, because I have no *earthly IDEA* how to actually code that into it.

On a more encouraging note, I believe I've figured out the solution to the problem of when the bot should talk and when it should be silent. The vertical model will be continuously learning the context around each phrase, and it stands to reason that there will be times when the vertical model has a very good idea of something to say, and other times when it just won't have a clue, because it's seeing a discussion of something it's not familiar with yet. So when it doesn't have a good idea what to say, the bot should be silent, and when a topic comes up that the bot has become familiar with, it can reply, because it will (hopefully) have something relevant to add.

The threshold for how "sure" the bot has to be before it will reply can be automatically regulated. How so? Well, you don't know in advance what people will say, so you don't know whether they'll say lots of things the bot knows about, turning it into a motormouth, or lots the bot knows nothing about, making it silent. So you give the bot a general guideline for how many things per minute it should say. It continuously compares how often it has said something against how often you want it to, and if it's talking *too much* it starts upping the threshold (meaning the vertical model needs to be more sure before it says something); if it's talking *too little*, it *lowers* the threshold until it's talking more again.
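The feedback loop described above can be sketched as a simple controller: smooth the observed talk rate, compare it to the target, and nudge the confidence threshold up or down. All the constants here (smoothing factor, step size) are illustrative guesses, not values from the post.

```cpp
#include <cassert>

class RateGovernor {
    double threshold;          // confidence the model must clear, 0..1
    double targetRate;         // desired fraction of lines answered
    double observedRate = 0;   // smoothed fraction actually answered
public:
    RateGovernor(double target, double start = 0.5)
        : threshold(start), targetRate(target) {}
    // Call once per incoming line with the model's confidence in its best reply.
    bool consider(double confidence) {
        bool speak = confidence >= threshold;
        // Exponentially smoothed talk rate.
        observedRate = 0.95 * observedRate + 0.05 * (speak ? 1.0 : 0.0);
        // Too chatty: raise the bar. Too quiet: lower it.
        if (observedRate > targetRate) threshold += 0.01;
        else                           threshold -= 0.01;
        if (threshold < 0) threshold = 0;
        if (threshold > 1) threshold = 1;
        return speak;
    }
    double currentThreshold() const { return threshold; }
};
```

Note that this regulates rate, not interest; as Adron points out in the next post, holding the rate constant hides how "interested" the bot is in a topic.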
October 9, 2003, 7:53 PM
Adron
[quote author=MrMachineCode link=board=17;threadid=2739;start=15#msg23566 date=1065729189]
It continuously monitors how often it's said something against how often you want it to say things and if it's talking *too much* it starts upping the threshold (meaning the vertical model needs to be more sure before it says something); and if it's talking *too little* it can *lower* the threshold until it's talking more again.
[/quote]

That doesn't entirely make sense. If it just changes the threshold so it's always talking at the same rate, then it won't seem interested in the topics it knows and uninterested in the ones it doesn't...
October 9, 2003, 11:46 PM
MrMachineCode
I just put the core functionality of the bot into a DLL. It only implements the horizontal models, not the vertical model, and it doesn't take into account the "count" of how many times each pattern comes up (all have equal weight), but it DOES work.

It consists of ChatAI.dll, which is the AI itself; other DLLs will be created as necessary to provide wrapper functions for specific interfaces. So far I've started on mAIintf.dll, which interfaces ChatAI.dll to mIRC scripts.

For things that can call the functions directly, such as a program written in Visual Basic, ChatAI.dll is all you need. For programs such as mIRC that expect a specific function prototype, I can write interface DLLs as long as I get enough information on what they require.

I've gotten it online with mIRC with mixed success -- one person talked to it for 3 hours without ever figuring out it was a bot; instead he thought "she" was talking strangely only because she was on drugs. (Because the bot told him "she" was on drugs. I made the bot's name female because more people PM'd it that way.) OTOH, it's gotten me permanently IP-banned from at least 6 EFnet IRC channels so far, so use it at your own risk. For now I think the best thing would be to have it stay mostly silent in public chat, and talk freely when someone PMs it.

If someone wants to collaborate with me to put this on Battle.net, I'll let you use the DLL if I can have a copy of the bot you interface it to. I plan to put the DLL up for download on a website, along with documentation explaining all the functions the DLL exports and the format of the commands. Sending commands to the AI will be a lot like sending commands to StealthBot: passing the function AI_do_command() the string ".CLEAR" will clear the AI's memory, passing the string ".SAVE filename.bin" will save the AI's brain to filename.bin, etc.
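The dot-command dispatch described above might look something like this. Only the function name and the ".CLEAR"/".SAVE" commands come from the post; the internals here are stand-ins for whatever the real ChatAI.dll does.

```cpp
#include <string>
#include <cassert>

// Illustrative stand-ins for the AI's internal state.
static bool g_cleared = false;
static std::string g_savedTo;

// Dispatch one dot-command string; returns false if unrecognized.
bool AI_do_command(const std::string& cmd) {
    if (cmd == ".CLEAR") {              // wipe the AI's memory
        g_cleared = true;
        return true;
    }
    if (cmd.rfind(".SAVE ", 0) == 0) {  // ".SAVE filename.bin"
        g_savedTo = cmd.substr(6);      // real code would write the brain here
        return true;
    }
    return false;
}
```

A single string-based entry point like this is easy to wrap for mIRC, Visual Basic, or anything else that can call a DLL export.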

Send me an email (dennisf486@yahoo.com) for more information, and please include an email address for me to reply to.
November 2, 2003, 11:58 PM