Evolution of the code
The code is still buggy, but nothing compared to the first 0.5 version put online one year ago :-)
I wish to thank all of you who report bugs, provide encouragement and donate to Squadron Benchmark. The code would not be in its current state without you! Testing this kind of webapp automatically is a nightmare, and developing without tests goes nowhere, so when you report bugs you really contribute to half of the development.
In terms of code, the app uses JavaScript, HTML5, CSS3 and SVG. SVG+JavaScript is rather convenient for displaying moving units and computing intersections and distances, and the display of the game area is now rather stable. CSS3 is powerful enough for some fun 3D effects (maybe too many in squad creation...).
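To give an idea of what this looks like in practice, here is a minimal sketch (not the actual Squadron Benchmark code) of moving an SVG unit and measuring the distance between two units with plain JavaScript:

```javascript
// Minimal sketch, not the real Squadron Benchmark code: place an SVG unit,
// move it with a transform, and measure the distance between two units.
var svgNS = "http://www.w3.org/2000/svg";

function createUnit(svg, x, y) {
  var unit = document.createElementNS(svgNS, "circle");
  unit.setAttribute("r", 10);
  unit.setAttribute("transform", "translate(" + x + "," + y + ")");
  svg.appendChild(unit);
  return unit;
}

function moveUnit(unit, x, y) {
  unit.setAttribute("transform", "translate(" + x + "," + y + ")");
}

// Distance between two units, read from their consolidated transform matrices.
function distance(a, b) {
  var ma = a.getCTM(), mb = b.getCTM();
  var dx = ma.e - mb.e, dy = ma.f - mb.f;
  return Math.sqrt(dx * dx + dy * dy);
}
```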
Now for the JavaScript development. After one year of development, the code has 15000 lines of JavaScript, with 7800 lines just to describe the specificities of 180 pilots and 190 upgrades; the rest is the game engine. Roughly speaking, each pilot and upgrade requires, on average, 15 lines of code to describe its special effects in terms of game mechanics. During this past year, I counted ~100 new cards published by FFG, including pilots and upgrades. If FFG publishes the same number of new units in the year to come, this means ~1500 new lines of code, assuming there are no new game mechanics. That should be sustainable!
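To illustrate the idea (the real code may be organized differently), a pilot description of roughly that size could look like this: the card data plus a small callback hooking its special effect into the game engine.

```javascript
// Hypothetical example of a ~15-line pilot description; the actual format
// used in the code may differ.
var pilots = pilots || {};
pilots["Luke Skywalker"] = {
  ship: "X-Wing",
  skill: 8,
  cost: 28,
  slots: ["Elite", "Torpedo", "Astromech"],
  // "When defending, you may change 1 of your Focus results to an Evade result."
  onDefenseDice: function (roll) {
    var i = roll.indexOf("focus");
    if (i !== -1) {
      roll[i] = "evade";
    }
    return roll;
  }
};
```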
The game engine itself still needs some improvement: some rules are not fully implemented, and the X-Wing rules FAQ is not yet completely applied. This is one of the main goals of this second year of development.
Artificial intelligence
The AI is very simple. There are only 340 lines of code dedicated to the AI, so don't expect miracles! It was not at the top of my priorities (bug fixing came first), but the critical bugs tend to be solved by now, so it's time for AI. One possibility is to let players customize the AI of their squads. Some effort is still needed before it can be customized, and I need to implement higher-level functions.
This is one of the main objectives of the forthcoming developments.
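To give a rough idea of the direction, here is a purely hypothetical sketch of what such a customizable hook could look like; the helper functions are assumptions and do not exist yet:

```javascript
// Purely hypothetical: the player would provide a function choosing a
// maneuver among the legal ones for a ship.
function defaultChooseManeuver(ship, legalManeuvers, enemies) {
  // Very naive strategy: pick the maneuver ending closest to the nearest enemy.
  var best = legalManeuvers[0], bestDist = Infinity;
  legalManeuvers.forEach(function (m) {
    // distanceAfterManeuver and nearestEnemy are assumed helpers, not real functions.
    var d = distanceAfterManeuver(ship, m, nearestEnemy(ship, enemies));
    if (d < bestDist) { bestDist = d; best = m; }
  });
  return best;
}
```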
All successfully completed combats are automatically registered in Google spreadsheets, and you can replay any of them through the battle log (the black book icon). There are currently 7 spreadsheets with 2000 combats each, so ~14000 combats are available online. I think this feature is still mostly unused, probably because you have to select or create a squad before you can check its battle log; it's not yet possible to query the database for squads directly. I'm open to any suggestion on how to use this; it's the only way to share information between players so far.
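For the curious, such spreadsheets can in principle be read directly from the browser; here is a sketch assuming the sheet is shared publicly, with a placeholder ID and a naive CSV split:

```javascript
// Sketch only: SPREADSHEET_ID is a placeholder, and this CSV split does not
// handle quoted commas.
var SPREADSHEET_ID = "YOUR_SPREADSHEET_ID";
var url = "https://docs.google.com/spreadsheets/d/" + SPREADSHEET_ID +
          "/gviz/tq?tqx=out:csv";

fetch(url)
  .then(function (response) { return response.text(); })
  .then(function (csv) {
    var rows = csv.trim().split("\n").map(function (line) {
      return line.split(",");
    });
    console.log(rows.length + " combat records loaded");
  });
```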
I do use this database to compute some stats, however, such as the "quality" of the AI. For instance, the average score between player and AI, computed every 200 combats, is shown in the following figure.
Each point corresponds to 200 combats, and the y-axis shows the % of total points won by the AI. The x-axis corresponds to time (from Oct. to Apr.).
Here the AI scores on average 25% of the points. So for a 200-point combat, the average score would be 50 for the AI and 150 for the human player. This is an average: most of the time the AI has a null score, but from time to time it manages to win.
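For reference, the binned average behind the figure is nothing fancy; here is a minimal sketch, assuming each combat record carries aiScore and totalScore fields (the names are made up):

```javascript
// Average % of points won by the AI, per bin of combats (e.g. binSize = 200).
// aiScore and totalScore are assumed field names, not the actual ones.
function aiScorePerBin(combats, binSize) {
  var bins = [];
  for (var i = 0; i + binSize <= combats.length; i += binSize) {
    var ai = 0, total = 0;
    combats.slice(i, i + binSize).forEach(function (c) {
      ai += c.aiScore;
      total += c.totalScore;
    });
    bins.push(100 * ai / total);
  }
  return bins;
}
```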
Multiplayer game
Squadron Benchmark includes most of the rules, including the rules of the upgrades/pilots. As it is only a one-player simulator, it's a complementary tool for the real tabletop game, for training and testing. As has been said elsewhere, it does not have the look and feel of a miniatures game of course, and playing for real against a real human is way better. Turning the app into a two-player game could be seen as making a computer alternative to the real game, and I'm not going to do that without the explicit agreement of FFG.
The Dark Side of Squadron Benchmark
The source code of Squadron Benchmark is available online, on GitHub. However, it's developed in vanilla JavaScript and CSS, with no developer documentation, no comments and no tests. As the time I can dedicate to this is limited, I prefer to code rather than document or coordinate the development. I do it for the fun, for the challenges, and for trying new ideas and new tech; I don't care about clean project management. So don't expect too much in this direction! :-)
I'll document it anyway, at some point. As I'm not especially competent in AI development, this is a good way to contribute if you wish :-) (well, as soon as a minimal set of AI functionalities is ready to be used, anyway).
In particular, if by chance someone is motivated by deep learning and willing to adapt/use ConvNetJS for Squadron Benchmark, I'll gladly help! I don't know if the AI can learn by itself how to play and find interesting strategies, or even if ConvNetJS can scale to the complexity of the X-Wing miniatures game rules, but it's worth a try and should be fun!
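For what it's worth, here is the general shape of how ConvNetJS's deep Q-learning module is usually wired into a game loop; the state encoding, the numbers and the reward are placeholders, not working Squadron Benchmark code (it assumes convnetjs.js and deepqlearn.js are loaded in the page):

```javascript
// Rough sketch of plugging deepqlearn into an AI turn; everything marked as a
// placeholder would have to be designed for real.
var NUM_INPUTS = 50;   // size of the state vector (placeholder)
var NUM_ACTIONS = 10;  // e.g. one entry per candidate maneuver (placeholder)
var brain = new deepqlearn.Brain(NUM_INPUTS, NUM_ACTIONS);

function aiTurn(gameState) {
  var state = encodeGameState(gameState);        // placeholder: flatten positions, hull, shields...
  var action = brain.forward(state);             // pick a maneuver index
  var reward = applyManeuver(gameState, action); // placeholder: e.g. damage dealt minus damage taken
  brain.backward(reward);                        // learn from the outcome
  return action;
}
```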
Cheers!