Card Games

And now for something completely different… This page is about a project that I've used to demonstrate techniques in simulation and data science/machine learning. It's not an academic research project, but instead a platform to expose students or business/industry types to some key ideas. Without further ado, let's introduce the project:

Concept: Card Games

I thought of the idea for this project while playing Euchre, a popular card game here in the Midwest. I wanted to demonstrate how a computer program could play a card game. Euchre's rules are kind of complicated and the game is played with partners, so for clarity I decided to scale the project back to a simple two-player trick-taking game.

The rules for the game are as follows: We play only with the two, three and four cards from each suit, for a total of twelve cards. Three cards are dealt to each player (by convention we call one of these players 'Player' and the other 'Computer', the idea being that a human will be playing the Computer), and a final card is revealed to both players. This card determines the trump suit for the game. That leaves five cards in a 'kitty' which is not revealed to either player, injecting an element of the unknown into the game.

Each player then bids how many tricks they believe they can take; bidding is blind, so the Player and Computer do not know each other's bid. Play then starts with the Player leading any card. Play is trick-based and suit must be followed whenever possible. At the conclusion of a round (after all three tricks are played) each player receives one point for making their bid and nothing otherwise. Play could then continue until a player reaches a set number of points, but that doesn't concern us here.

How Do We Begin?

How can we teach a program to play this game? Our starting point will be simulation. Working in Python, I first create the code that governs the basic play of the game but makes no attempt to play it well. Instead, this code contains functions for randomly creating card hands and the trump card, for determining which card wins when two cards are played in an individual trick, and for simulating the deal, play and rules of the game.
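To make this concrete, here is a minimal sketch of part of that basic machinery (the deck and the deal), with cards stored as (rank, suit) tuples; the function names are my own illustrations, not the post's actual code, and the trick logic appears in the next sketch.

```python
# A minimal sketch of the deck and the deal (illustrative names, not the
# post's actual code). Cards are (rank, suit) tuples.
import random

RANKS = [2, 3, 4]
SUITS = ['C', 'D', 'H', 'S']

def make_deck():
    """All twelve cards: the 2, 3 and 4 of each suit."""
    return [(rank, suit) for suit in SUITS for rank in RANKS]

def deal():
    """Shuffle and deal: three cards each to Player and Computer, one face-up
    trump card, and a five-card kitty that neither player sees."""
    deck = make_deck()
    random.shuffle(deck)
    player, computer = deck[:3], deck[3:6]
    trump_card = deck[6]
    kitty = deck[7:]
    return player, computer, trump_card, kitty
```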

This basic code also contains an all-important function called play_out(). This function takes as its arguments the Player and Computer hands (stored as lists), the trump card, and some optional arguments, then randomly 'plays out' the game with these two hands. Crucially, however, it does not simply pair off cards but follows the rules of the game with respect to following suit, trump and leading. Within those constraints, card selection is random.
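As a sketch of how such a function might look, together with the trick and follow-suit logic (made-up helper names, and I also assume the winner of each trick leads the next one, which the post doesn't state explicitly):

```python
# A sketch of a random play-out that respects the rules: follow suit when
# possible, trump beats an off-suit lead, higher rank wins within a suit.
# This is my reconstruction of the idea, not the post's actual play_out().
import random

def trick_winner(lead_card, follow_card, trump_suit):
    """Return 0 if the led card wins the trick, 1 if the following card wins."""
    (lead_rank, lead_suit), (follow_rank, follow_suit) = lead_card, follow_card
    if follow_suit == lead_suit:                  # same suit: higher rank wins
        return 1 if follow_rank > lead_rank else 0
    return 1 if follow_suit == trump_suit else 0  # trump beats an off-suit lead

def legal_cards(hand, lead_suit):
    """Suit must be followed when possible; otherwise any card is legal."""
    same_suit = [c for c in hand if c[1] == lead_suit]
    return same_suit if same_suit else list(hand)

def play_out(player, computer, trump_card, leader=0):
    """Randomly (but legally) play the two hands to completion and return the
    tricks taken as (player_tricks, computer_tricks).
    leader=0 means Player leads the first trick, leader=1 means Computer does."""
    hands = [list(player), list(computer)]
    trump_suit = trump_card[1]
    tricks = [0, 0]
    for _ in range(len(hands[0])):
        lead = random.choice(hands[leader])
        hands[leader].remove(lead)
        follower = 1 - leader
        follow = random.choice(legal_cards(hands[follower], lead[1]))
        hands[follower].remove(follow)
        winner = leader if trick_winner(lead, follow, trump_suit) == 0 else follower
        tricks[winner] += 1
        leader = winner                           # assume the trick winner leads next
    return tricks[0], tricks[1]
```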

Building the AI

With these basics in place we can start to build the AI via simulation. The first step is selecting a bid from the cards that were dealt. We do this by fixing the Computer hand and the trump card but randomly drawing a Player hand, then using the play_out() function to see what happens. Of course, this play-out is unguided and random, but by repeating the procedure about a thousand times we start to build a picture of what we should bid. The possible bids are zero through three, and a function get_bid(), which performs these repeated simulations, returns a list of counts that looks like [112, 204, 578, 106]. This means that out of the thousand simulations we ran, 112 times zero tricks were taken, 204 times one trick was taken, and so forth. We then base the bid on the most probable outcome; in this case we should bid two tricks, since 578 of the simulations yielded two tricks. Of course, the objection here is that these numbers aren't coming from a smart Player but from random plays. But simulating a 'smart' player would require an AI of its own, so we have a chicken-and-egg problem and we've got to start somewhere.
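A rough reconstruction of get_bid() along these lines, reusing make_deck() and play_out() from the sketches above (again my own code, not the post's):

```python
# Sketch of the bidding simulation: fix the Computer hand and trump card, draw
# random Player hands from the remaining cards, play each deal out at random,
# and count how often the Computer takes 0, 1, 2 or 3 tricks.
import random

def get_bid(computer, trump_card, n_sims=1000):
    remaining = [c for c in make_deck() if c not in computer and c != trump_card]
    counts = [0, 0, 0, 0]
    for _ in range(n_sims):
        player = random.sample(remaining, 3)          # a random hand for Player
        _, cmp_tricks = play_out(player, computer, trump_card, leader=0)
        counts[cmp_tricks] += 1
    return counts                                     # e.g. [112, 204, 578, 106]

# The bid is then the most probable outcome:
# counts = get_bid(computer, trump_card); bid = counts.index(max(counts))
```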

I need to mention here that this step depends on the particular scoring of the game. I took a simple scoring of 1 point for making the bid and 0 otherwise, so we select the bid that is most likely to be made. Another scoring method would be (1 + number of tricks bid), so a made 0 bid is worth 1 point, a made 1 bid is worth 2 points, and so on. Under that scoring system we might instead want to select the bid that maximizes the expected return. This is actually quite tricky, since it also depends on whether we're playing up to a certain number of points (consider playing up to only 1 point vs. playing up to 1000 points; how does the strategy change?).
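As a toy illustration of the expected-return idea, using the counts quoted above and assuming that 'making the bid' means taking exactly the number of tricks bid:

```python
# Choosing a bid by expected return under the alternative (1 + bid) scoring,
# rather than by the most likely outcome. Counts are the example from the text.
counts = [112, 204, 578, 106]                 # times 0, 1, 2, 3 tricks were taken
n_sims = sum(counts)

expected = {bid: (1 + bid) * counts[bid] / n_sims for bid in range(4)}
# {0: 0.112, 1: 0.408, 2: 1.734, 3: 0.424} -> bidding 2 still wins here, but with
# other counts the expected-return bid can differ from the most probable outcome.
best_bid = max(expected, key=expected.get)
```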

Actually Playing the Game

Having selected the bid, we then construct a computer strategy function cmp_strat() which selects (within the rules of the game) the best card to play at each trick. It does so by again simulating thousands of games using the play_out() function and picking the card which maximizes the probability of making the Computer’s bid. In this way the AI has to think many moves ahead. It performs this operation with every trick and uses the fact that play_out() is written to play out hands of arbitrary size.
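A rough, simplified reconstruction of the idea behind cmp_strat() (my own sketch, not the post's code), restricted to the case where the Computer is replying to a card the Player has just led, and reusing the helpers from the earlier sketches:

```python
# For each legal candidate card, resolve the current trick, then randomly play
# out the remaining cards many times against a random guess at the Player's
# hidden cards, and keep the card that most often lands exactly on the bid.
import random

def cmp_strat(computer, trump_card, bid, tricks_so_far, led_card, seen, n_sims=500):
    """`seen` is every card the Computer knows about: its own hand, the trump
    card, and all cards already played (including led_card)."""
    trump_suit = trump_card[1]
    unseen = [c for c in make_deck() if c not in seen]
    best_card, best_prob = None, -1.0
    for card in legal_cards(computer, led_card[1]):
        made = 0
        for _ in range(n_sims):
            wins_trick = trick_winner(led_card, card, trump_suit) == 1
            total = tricks_so_far + (1 if wins_trick else 0)
            cmp_rest = [c for c in computer if c != card]
            player_rest = random.sample(unseen, len(cmp_rest))   # guess at hidden cards
            leader = 1 if wins_trick else 0                      # trick winner leads next
            _, extra = play_out(player_rest, cmp_rest, trump_card, leader=leader)
            if total + extra == bid:
                made += 1
        if made / n_sims > best_prob:
            best_card, best_prob = card, made / n_sims
    return best_card
```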

Outcome and Analysis

We can actually pit the 'Player' and the 'Computer' against each other, each using the same AI, and see what happens. A good ol' fashioned pivot table helps us understand the results:

[Pivot table of results by hand composition (main_pivot)]

So what's going on here? NT is the number of trump cards in the Computer's hand; N2, N3 and N4 are the numbers of non-trump 2's, 3's and 4's; and RESULT is the difference between what the Computer bid and what it made (CBID and CMADE). A RESULT of 0 means that the Computer wins that game. In the table above, 5000 games were simulated.

A couple of things to note about the table. I've sorted the table by 'weight', which measures the impact of that case on the overall performance of our algorithm. This helps us determine how we might want to tweak the algorithm: it makes little sense to try to improve the case when N4=3, since that case occurs only 30 times out of 5000 games. The other thing to note is that overall the Computer is successful almost 80% of the time. This may seem extraordinary since it's playing a Player using the exact same AI, but recall that in our game the Player always leads the first trick. This essentially means the Player has to reveal a third of 'his' hand before the Computer has to play a card, which gives the Computer a huge advantage; when I change the internals to have the Computer lead, the situation reverses itself. Perhaps this is something to keep in mind if you ever play a game like this in a casino…

We can also break it down by the number of trump cards in the computer’s hand:

[Table of results by number of trump cards in the Computer's hand (a1)]

We see that the algorithm gets better the more trump cards are in the Computer’s hand.

Decision Science Approach:

Data science and machine learning provide another approach to the problem of constructing an AI. We couldn't take this approach in the beginning since, short of recruiting people to play thousands of games against the Computer, we didn't have any data. But now we do, and moreover there are good reasons to inject some adaptive machine learning into our work. First, a human player isn't going to play the same way as Player did using our AI. Second, the algorithm is computationally intensive; even with just three-card hands it takes about half a second to simulate a game. What if we wanted to play a version of this game with 5 or 6 cards in a hand? The required computation would quickly become excessive.

We consider our options. The easiest target is the get_bid() function. Our pivot table above suggests that we might be able to predict the bid using the variables NT, N2, N3 and N4. There are several approaches we could take, but the easiest is a decision tree. Using the rpart package in R, we get the following tree for predicting our bid:

[Figure: Decision Tree for Card Game (dec_tree)]
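The post builds the tree with rpart in R; purely as an illustration, an analogous fit in Python with scikit-learn might look like the following, where simulated_games.csv is a hypothetical export of the 5000 simulated games with the columns from the pivot table:

```python
# Fit a shallow decision tree predicting the bid from the hand composition.
# scikit-learn stands in for rpart here; the CSV file name is hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

games = pd.read_csv("simulated_games.csv")        # one row per simulated game
X = games[["NT", "N2", "N3", "N4"]]               # hand composition features
y = games["CBID"]                                 # the bid chosen by get_bid()

tree = DecisionTreeClassifier(max_depth=3)        # shallow, so the rules stay readable
tree.fit(X, y)
print(export_text(tree, feature_names=["NT", "N2", "N3", "N4"]))
```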

Interestingly enough, the decision tree is more accurate when we use it to predict CBID (the Computer's bid) rather than CMADE (the actual number of tricks the Computer takes); the tree above predicts CBID. We can randomly split the 5000 games into a model set and a validation set, build a tree on the model data, predict the validation set with that tree, and compare to the actual results; this is done fifty or so times (I forget) and we summarize the result:

[Table: Decision Tree Summary (a2)]

The table measures the accuracy of our decision trees in predicting CBID and CMADE, both overall and for the cases with and without trump. So, for example, overall (as measured by the mean) the decision tree correctly predicts CBID 1 - .116 = 88.4% of the time, whereas it correctly predicts CMADE only 1 - .245 = 75.5% of the time. Note that, just like the simulation algorithm, our tree is better at predicting when there is trump in the hand.
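A sketch of that repeated split-and-validate procedure, again using scikit-learn in place of rpart and the same hypothetical data export:

```python
# Fifty random model/validation splits: fit a tree on the model set, predict
# the validation set, and record the misclassification rate each time.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

games = pd.read_csv("simulated_games.csv")
X, y = games[["NT", "N2", "N3", "N4"]], games["CBID"]

error_rates = []
for i in range(50):
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=i)
    tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
    error_rates.append((tree.predict(X_val) != y_val).mean())

print(np.mean(error_rates), np.min(error_rates), np.max(error_rates))
```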

By way of comparison, recall that the simulation correctly predicted the number of tricks (CMADE) the Computer would take about 80% of the time. At first glance it may appear that the tree isn't as good; after all, it predicts CMADE only about 75.5% of the time. But this is misleading, since the actual AI in the game plays towards whatever bid is selected. So if the decision tree selects one bid and the get_bid() function selects another, it is possible that the Computer could have earned either bid with that hand! Actually, a subtle analysis of the data (which I will not present here) suggests that the Computer bids 3 less often than it should; that is, it frequently bids 2 when it actually has a hand strong enough to make 3. But most of the time it earns the 2 bid, because once that bid is selected it plays towards that target. For example, if NT=2 and N4=1, the hand can always be played in a way which yields 3 tricks, but most of the time the Computer will bid 2 instead and 'waste' the 4 card on the first trick (if it takes the first trick it will take all three).

So to really compare the performance of our decision tree to the get_bid() function, we need to code the tree into our Python program, then simulate 5000 games using the tree to select the bid and compare the results. We get the following table:

[Table: Results using Tree for Bid (dec_tree_results)]
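For concreteness, 'coding the tree into our Python program' amounts to hard-coding its splits as a small function. The thresholds below are placeholders only, since the actual split values live in the tree figure rather than in the text:

```python
def tree_bid(NT, N2, N3, N4):
    """Hypothetical hand-coded decision tree returning a bid of 0-3; the real
    splits come from the rpart figure, so these thresholds are illustrative."""
    if NT >= 2:
        return 3 if NT == 3 else 2
    if NT == 1:
        return 2 if N4 >= 1 else 1
    return 1 if N4 >= 2 else 0
```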

Back to the results table: rather remarkably, the tree is actually better! It makes its bid about 81% of the time. Note that I did not alter the 'Player' bid/strategy at all, only the Computer's, so the results are truly comparable with the previous table. Interestingly, the tree is significantly better in the case N2=N3=N4=1, which is also the most common hand to be dealt. We can investigate this further by looking at a table for the get_bid() function:

[Table: Summary of get_bid() (bid)]

There’s some interesting data here. For example, the tree always bids 0 on the 1-1-1 hand and makes it 87% of the time whereas get_bid() bids either 0 or 2 and makes it only 76% of the time; it would be better off always bidding 2. However, notice that the 1-0-2 case is split almost evenly between 0 and 2 bids with the Computer making this bid about 69% of the time (from the first table on this page). In contrast, the tree always bids 2 and makes it only 58% of the time. So each bid method has its strengths and weaknesses.


Conclusions:

I've barely scratched the surface of what's possible in this sort of problem, both from a simulation perspective and from a data perspective (or both working together!). One could, for example, construct a hybrid bid algorithm which uses either get_bid() or the tree, depending on which method is superior in that particular case.
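A rough sketch of that hybrid idea, deferring to whichever bidder has the better track record for a given hand shape; the two lookup entries follow the cases discussed above, and hand_shape() is an assumed helper that counts NT, N2, N3 and N4 for a hand:

```python
# Delegate the bid to the tree or to the simulation, depending on hand shape.
PREFER_TREE = {
    (0, 1, 1, 1): True,    # tree is clearly stronger on the 1-1-1 hand
    (0, 1, 0, 2): False,   # get_bid() does better on the 1-0-2 hand
}

def hybrid_bid(computer, trump_card):
    shape = hand_shape(computer, trump_card)      # (NT, N2, N3, N4); assumed helper
    if PREFER_TREE.get(shape, False):
        return tree_bid(*shape)                   # hand-coded tree from earlier
    counts = get_bid(computer, trump_card)        # simulation-based bid
    return counts.index(max(counts))
```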

Other approaches to the data analysis could be used as well. The bidding, with its possible outcomes of 0, 1, 2 or 3, seems well suited to a multinomial regression approach. A more substantial challenge would be using a data method to replace the cmp_strat() function. Recall that this function selects the 'best' card to play by simulating thousands of games to determine which card gives the Computer the best chance of making its bid. This is computationally intense (and scales very badly as hand size increases) and it would be desirable to replace it with something that didn't require simulating thousands of games. But unlike bidding, where any choice between 0 and 3 is possible though not necessarily desirable, actual game play is restricted by the rules, which complicates the modelling.

Anyhow, I hope this rather lengthy post gives you a taste of what’s possible using these techniques. Thanks for reading!
