08-07-08 - 2

I think I'm going to post my poker AI program "GoldBullion". I'm a little worried about the consequences, but fuck it. I don't think the poker sites can come after me in court since they're all illegal offshore operations. Plus the code is hard enough to dig into that probably nobody will ever look at it.

I really like a lot of the stuff I did in there. Some of it is very generic programming stuff:

One trivial thing that worked really well: I generate reports in HTML. On each poker spy window you can click a button to see the recent hand history; I write an HTML file and just pop a browser pointed at it. This is so much nicer than all the custom GUIs people build, because a browser actually has nice layout and fonts and all that. Plus, it means the user can just copy-paste hands out as HTML to share with friends or whatever. The report is also smart about what it shows. It doesn't dump every single hand; it shows the last few hands plus the very significant hands from the past, which makes it much easier to scan and find the information you really want.
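A minimal sketch of that report scheme, in Python. The function and field names (`write_hand_report`, `id`, `summary`, `significant`) are my own hypothetical choices, not the actual GoldBullion code; the point is just the filtering (last few hands plus flagged significant ones) and the fact that a plain HTML file gets you layout, fonts, and copy-paste for free:

```python
import html

def write_hand_report(hands, path, recent=3):
    """Write an HTML hand-history report.

    'hands' is a list of dicts with hypothetical keys: 'id', 'summary',
    and a 'significant' flag. We show every significant hand from the
    past plus the last few hands, rather than dumping everything.
    """
    shown = [h for h in hands if h.get("significant")] + hands[-recent:]
    seen, rows = set(), []
    for h in shown:                      # de-dup while keeping order
        if h["id"] in seen:
            continue
        seen.add(h["id"])
        rows.append("<tr><td>%d</td><td>%s</td></tr>"
                    % (h["id"], html.escape(h["summary"])))
    doc = ("<html><body><h2>Recent hands</h2><table border=1>"
           + "".join(rows) + "</table></body></html>")
    with open(path, "w") as f:
        f.write(doc)
    return path

# To display it, just point the default browser at the file:
#   import webbrowser; webbrowser.open("file://" + os.path.abspath(path))
```

The browser does all the rendering work, and because it's a real file on disk the user can reload it, save it, or paste chunks of it anywhere.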

Another thing that worked nicely was the way I did my database. The hand history database is kept in two files: one is a raw journal of every hand seen, in a very complete, flexible format that never goes out of date; the other is a flat baked memory dump of the cached player stat structures, so it can be loaded very fast. The journal only ever gets appended to, and I use a special temp append file to make the appending crash-safe, so you never corrupt your journal. The part I really liked is that when the cached data structures change, the program detects it, automatically loads the journal instead, and regenerates the fast version. While developing I never have to worry about keeping my fast structure backwards compatible, and I never have to run a "database fix" program - I just keep running GoldBullion. Usually it starts up instantly; sometimes it detects a bad cache file and takes a while to start up. Very nice IMO.
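Here's a rough sketch of that two-file scheme in Python. All the names and formats are hypothetical stand-ins (JSON lines for the journal, pickle for the baked cache - the real thing used a raw memory dump), but the shape is the same: a versioned fast cache, a crash-safe append through a temp file, and an automatic slow-path rebuild from the journal whenever the cache is missing, stale, or unreadable:

```python
import json, os, pickle

JOURNAL = "hands.journal"   # append-only, complete record; never goes out of date
CACHE   = "stats.cache"     # baked snapshot of player stats, fast to load
CACHE_VERSION = 3           # bump whenever the cached structure changes

def append_hand(hand):
    # Crash-safe append: write the record to a temp file and fsync it
    # before touching the journal, so a crash mid-write can never leave
    # garbage in the journal itself.
    tmp = JOURNAL + ".tmp"
    with open(tmp, "w") as f:
        f.write(json.dumps(hand) + "\n")
        f.flush()
        os.fsync(f.fileno())
    with open(tmp) as src, open(JOURNAL, "a") as dst:
        dst.write(src.read())
    os.remove(tmp)

def load_stats():
    # Fast path: load the baked cache if it exists and its version matches.
    try:
        with open(CACHE, "rb") as f:
            version, stats = pickle.load(f)
        if version == CACHE_VERSION:
            return stats
    except (OSError, EOFError, pickle.PickleError, ValueError):
        pass
    # Slow path: replay the journal and re-bake the cache.
    stats = rebuild_from_journal()
    with open(CACHE, "wb") as f:
        pickle.dump((CACHE_VERSION, stats), f)
    return stats

def rebuild_from_journal():
    stats = {}
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as f:
            for line in f:
                try:
                    hand = json.loads(line)
                except ValueError:
                    continue  # partial trailing line from a crash; skip it
                for p in hand["players"]:
                    s = stats.setdefault(p, {"hands": 0})
                    s["hands"] += 1
    return stats
```

The nice property is exactly the one described above: you can change the cached structure freely during development, bump `CACHE_VERSION`, and the next startup silently pays the one-time journal replay instead of needing a migration tool.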

Some of the debugging stuff was also very cool. Checking on the bots is not trivial: there are so many situations they can be in, and most of them are trivially easy, so the bugs hide in the rare hard spots. You can't really just give them a regression test that you step through and verify - they might look good on 100 hands and then do something weird on the next - and it's also really hard to define what right or wrong behavior even is. But there are things you can do. For one thing, the old fixed-function TTH bots were a nice baseline; you can play your research bot against them and not worry about them being buggy or changing. Just letting the bots sit and play against each other was very good. I used it in two ways: 1. I could log the generated hands and build statistics on the players just like I could on live players, then look at how the bot plays over 100,000 hands and see flaws in things like the aggression numbers or the continuation bet %; 2. I could run the same tools I use on real histories to filter for the very interesting hands and manually examine how the bots played them. That leads to the next cool debugging tool - I could make the bots replay any hand history and log a bunch of info about what they're thinking at each step. I could also output the 13x13 bitmap image of the bayesian hand probabilities and watch them change over time visually.
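For concreteness, here is a sketch of that 13x13 probability bitmap. A hold'em starting hand is one of 169 classes - pairs on the diagonal, suited combos in one triangle, offsuit in the other - so a belief over an opponent's hand maps naturally onto a 13x13 grayscale image. The function names and the PGM output format are my own choices for illustration, not GoldBullion's:

```python
RANKS = "AKQJT98765432"

def hand_grid(prob):
    """Map hand-class probabilities onto the standard 13x13 grid.

    'prob' maps canonical names like 'AA', 'AKs', '72o' to probabilities.
    Pairs go on the diagonal, suited hands above it, offsuit below.
    """
    grid = [[0.0] * 13 for _ in range(13)]
    for i, r1 in enumerate(RANKS):
        for j, r2 in enumerate(RANKS):
            if i == j:
                name = r1 + r2          # pair, e.g. 'QQ'
            elif i < j:
                name = r1 + r2 + "s"    # suited, upper triangle
            else:
                name = r2 + r1 + "o"    # offsuit, lower triangle
            grid[i][j] = prob.get(name, 0.0)
    return grid

def write_pgm(grid, path):
    # Dump as plain-text PGM (brightest = most likely) so any image
    # viewer can display it; write one per betting decision and you can
    # flip through them to watch the belief narrow over the hand.
    peak = max(max(row) for row in grid) or 1.0
    with open(path, "w") as f:
        f.write("P2\n13 13\n255\n")
        for row in grid:
            f.write(" ".join(str(int(255 * v / peak)) for v in row) + "\n")
```

Dumping one image per decision point during a replay gives exactly the "watch the bayesian probabilities change over time" view: you can literally see the opponent model sharpen or smear as actions come in.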
