Monday, June 18, 2012

Continuous Deployment: Doing the mundane several times a day

I'm a huge fan of Continuous Integration, as well as Continuous Delivery, and I think it should be used to its full advantage - especially in enterprise or other non-trivial projects. Even if you can't do the seemingly impossible and deploy 50 times a day, you can reap the benefits with fairly low effort. Here's how I managed to do the mundane several times a day.



I recently got a new job and one of the things I did on my very first day was set up a Continuous Integration server. Most of them support similar features but I chose Jenkins because it seems like the most popular open source choice. Jenkins was a breeze to work with. Just download it, install it, start it from the command line, and point your web browser at it. You can add new projects, configure it, download new plugins, and even install it as a Windows service right from the web page. Easy-peasy.
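For the record, getting a local instance running from the .war distribution really is that simple:

    java -jar jenkins.war

Then point a browser at http://localhost:8080 and do the rest from the web page.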

The first thing I did was automate the build process for the project I inherited. Just a batch file that told MsBuild to rebuild the entire solution.
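The batch file is essentially one line (the solution name and configuration here are placeholders, and it assumes msbuild.exe is on the PATH):

    msbuild OurSolution.sln /t:Rebuild /p:Configuration=Release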

After that I set up a very basic integration job for the project I inherited. Initially the Jenkins job just compiled the project, but after a couple of days I added a file with two solution-wide MsBuild targets: one for the developers that runs the code inspection tools and prints the results to the console, and one for the integration server that runs the same inspection and dumps the results to an XML file. Once you download the Violations plugin and point your Jenkins job at that XML file, it will show a nice chart of how many code inspection warnings you have over time. Let me tell you this right now: it's nice to have a concrete measurement of how your changes affect the quality of the codebase. Real nice.

For our first code inspection tool I chose the standard in the .NET world: FxCop. FxCop can't tell if the overall architecture or design is good or bad, but it has an extensive library of rules covering more minor (although still important) things like following the naming conventions, disposing IDisposable instances, avoiding unnecessary casts, and many other "in the small" best practices. There are a couple I disagree with (like making a method static if it doesn't touch any instance state), but it's easy enough to disable those rules. FxCop won't make a bad product great, but if you set aside time to fix a few warnings each day, it can nudge you toward cleaner code.
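For reference, those two targets end up looking something like this. The FxCop path, assembly name, and target names here are placeholders, and the real file has a bit more to it:

    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
        <FxCopCmd>C:\Program Files (x86)\Microsoft FxCop 10.0\FxCopCmd.exe</FxCopCmd>
        <InspectedAssembly>Our.Project\bin\Release\Our.Project.dll</InspectedAssembly>
      </PropertyGroup>

      <!-- For developers: run the inspection and print the warnings to the console. -->
      <Target Name="InspectCode">
        <Exec Command="&quot;$(FxCopCmd)&quot; /file:$(InspectedAssembly) /console" />
      </Target>

      <!-- For the integration server: same inspection, but dump the results to XML
           so the Violations plugin can chart them. -->
      <Target Name="InspectCodeForCi">
        <Exec Command="&quot;$(FxCopCmd)&quot; /file:$(InspectedAssembly) /out:fxcop-results.xml" />
      </Target>
    </Project>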

After a couple weeks I was able to start writing some unit tests and add those to the integration build. The NUnit plugin for Jenkins will take the results and make a few nice charts that show your current test runs as well as test runs over time. Again, charting progress over time really gives these things more meaning.


The second thing I did with Jenkins involves the deployment process. We don't deploy directly to production, but we do have a nightly deployment process for the project I'm working on. We make a setup.msi file, zip it, send it to the client, and send an email of the changes. Doing it by hand takes several minutes, and it's easy to forget to update the product version, forget what changes are being deployed, or lose track of time and end up doing it all right when you should be heading home. That's why the deployment (even if it's just deploying for testing) should be automated, and that's where the real fun comes in. It took a few days to get it worked out, but the entire process can now be done by a single MsBuild target (there's a trimmed-down sketch of it after the list). Here's what it does:
  1. Update the msi settings. A custom inline task reads a file, increments the ProductVersion, sets a new Guid for the ProductCode, then saves over the original vdproj file and updates source control. I later found out that many other people have had to do similar things.
  2. Build the solution. Do a full clean, then rebuild the msi and all its dependencies with devenv, since MsBuild can't build the vdproj project type.
  3. Zip the resulting setup files with today's date in the filename using the MsBuild Community Tasks.
  4. Copy the file to a shared folder.
  5. Update PivotalTracker. We've been trying out PivotalTracker and it's working so well for us that I can't recommend it enough. It's incredibly easy to use, it provides just enough project management to get the job done without getting in the way, and it's dead-simple to integrate with external tools. We have another custom inline task that uses PivotalTracker's web API to change every story with a status of FINISHED to DELIVERED. The response from PivotalTracker is a list of the stories that were delivered. Anyone looking at our project page in PivotalTracker will soon see what has been delivered and is ready for review.
  6. Parse the results. Using another Community Task, get the names of all the stories that were just delivered. As usual, someone on StackOverflow ran into the same problem I did and got a great answer.
  7. Send out a notification. Email me a link to the newly shared file with a list of the names of the stories that were delivered. I then tweak the email and resend it to the people involved in reviewing the changes.
And finally, the whole thing is an automated Jenkins job that runs at 5:00pm every day.
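For the curious, here's a heavily trimmed sketch of that target. The custom task names (UpdateInstallerVersion, DeliverFinishedStories) and all of the paths and property values are placeholders, and the UsingTask declarations for the inline tasks and the Community Tasks import are left out:

    <Target Name="NightlyDeploy">
      <!-- 1. Bump the ProductVersion and generate a new ProductCode GUID in the vdproj. -->
      <UpdateInstallerVersion VdprojFile="Setup\Setup.vdproj" />

      <!-- 2. Rebuild with devenv, since MSBuild can't build .vdproj projects. -->
      <Exec Command="devenv OurSolution.sln /Rebuild Release" />

      <!-- 3. Zip the setup files with today's date in the name (Community Tasks Time and Zip). -->
      <Time Format="yyyy-MM-dd">
        <Output TaskParameter="FormattedTime" PropertyName="Today" />
      </Time>
      <Zip Files="Setup\Release\Setup.msi;Setup\Release\setup.exe"
           ZipFileName="Setup-$(Today).zip" />

      <!-- 4. Copy the zip to the shared folder. -->
      <Copy SourceFiles="Setup-$(Today).zip" DestinationFolder="\\server\nightly" />

      <!-- 5 and 6. Mark every FINISHED story as DELIVERED through the PivotalTracker
           web API and capture the names of the delivered stories from the response. -->
      <DeliverFinishedStories ProjectId="$(TrackerProjectId)" Token="$(TrackerToken)">
        <Output TaskParameter="DeliveredStoryNames" PropertyName="DeliveredStories" />
      </DeliverFinishedStories>

      <!-- 7. Email a link to the zip and the list of delivered stories (Community Tasks Mail). -->
      <Mail SmtpServer="$(SmtpServer)"
            From="build@example.com"
            To="me@example.com"
            Subject="Nightly deployment $(Today)"
            Body="\\server\nightly\Setup-$(Today).zip%0A%0A$(DeliveredStories)" />
    </Target>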


It's not perfect and there's a lot more that could be done (like more unit tests, other code inspection tools, and making our build tasks shared across multiple projects), but even while getting familiar with a new codebase and researching the various parts of this process, I managed to improve the codebase and the process while still providing value to the customer. And that, in my experience, is the real double win of relying on a Continuous Integration server:
  • Quick feedback that, when diligently followed, provides value to the company by making the codebase better and better every day.
  • Powerful tools that, when properly used, automate technical chores so we can spend more time providing value to the customer.

Tuesday, June 12, 2012

Scaling monster difficulty: iryrwarosh and beyond

More vague notes about roguelikes; this time about monster difficulty, or the power curve.

In I Rule, You Rule, We All Rule Old-School Hyrule there are two ways that creatures become deadlier over time: natural selection and big monsters. By natural selection I mean that every time something dies, a new random creature (monster or goblin) is born. So if you kill a bunch of Mountain Monsters there will be fewer of them and slightly more of everything else; if you avoid a certain type, there will be more of them, since you're killing fewer of them. Subtle - but given enough time it probably makes a difference. What new creature gets spawned is chosen at random, so you can't cause extinctions, but you don't get a runaway top predator either. It also means you're never safe or bored, because the world is always crowded.

The second way is that each new monster has a small chance to be born a "big monster" with extra health and a few other perks. The chance of being born big increases each time a new monster is created, so if you play long enough, all monsters will be born big.
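In code terms the two rules boil down to something like this (an illustrative sketch only, not the game's actual implementation; the names and numbers are made up):

    using System;

    // Rule 1: every death spawns a random new creature, so the types you hunt
    //         shrink relative to the types you leave alone.
    // Rule 2: each new creature has a chance to be born "big", and that chance
    //         creeps upward with every birth.
    public class Creature
    {
        public string Kind;
        public bool IsBig;
    }

    public class Spawner
    {
        private static readonly string[] Kinds = { "Mountain Monster", "Forest Monster", "Goblin" };
        private readonly Random random = new Random();
        private double bigChance = 0.0;                     // grows with every birth

        public Creature OnCreatureDied()
        {
            var born = new Creature { Kind = Kinds[random.Next(Kinds.Length)] };
            born.IsBig = random.NextDouble() < bigChance;   // big = extra health and a few perks
            bigChance = Math.Min(1.0, bigChance + 0.001);   // play long enough and everything is born big
            return born;
        }
    }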

Both of these are affected by the fact that creatures who bump into each other attack each other. Imagine a region with two types of monsters: one type that always attacks its neighbors and does extra damage, and another type that are pacifists. Over time the aggressive ones would dominate. If the passive creatures had spikes and heavy armor, though, the aggressive ones might impale themselves into becoming the smaller population. Likewise, if everything is killing everything else then you'll soon see big monsters, whereas a pacifist world would take much longer to produce them.



I'm happy with those but there were a few other ideas that didn't make it into the game.

Goblins are scattered around and use a random weapon, but my original idea was that they would belong to local clans that learn over time. If a clan found that spears did more damage than swords, then new goblins in that clan would be more likely to start with spears. If light armor worked better than heavy armor, then they would tend to use that. This way each clan could develop preferences based on their local monsters, and they would tend to use the weapons and armor that foil the player most effectively. Also, if something sucked then no one would use it.
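The idea never shipped, but the learning rule could have been as simple as a damage tally per weapon that weights what new clan members start with. Another illustrative sketch, not anything from the game:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Each clan remembers how much damage its members have done with each weapon;
    // new goblins pick a starting weapon with probability proportional to that tally.
    public class GoblinClan
    {
        private readonly Dictionary<string, double> damageByWeapon = new Dictionary<string, double>
        {
            { "sword", 1.0 }, { "spear", 1.0 }, { "bow", 1.0 }   // start unbiased
        };
        private readonly Random random = new Random();

        public void RecordDamage(string weapon, double damage)
        {
            damageByWeapon[weapon] += damage;
        }

        public string ChooseStartingWeapon()
        {
            double roll = random.NextDouble() * damageByWeapon.Values.Sum();
            foreach (var pair in damageByWeapon)
            {
                roll -= pair.Value;
                if (roll <= 0) return pair.Key;
            }
            return damageByWeapon.Keys.Last();   // floating point fallback
        }
    }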

The rivals are also a growing threat that could have been much better. They seek out hearts and heart containers, so they tend to stay healthy and gain hearts over time. It also means there are fewer power-ups lying around for the player to pick up. They don't use their items' special abilities, though, and they can't pick up better items either. I tried to avoid having things be clearly better or worse, but there are some exceptions: Spike Armor is clearly better than Heavy Armor, yet your rivals will just ignore it if they find it.

They also don't learn. They're hard coded to fear some monster traits more than others - like poison, spiked, and double attack - but maybe there are traits they should fear more? Or traits they should be attracted to instead of afraid of? One rival feature that I'm proud of is that they don't know more about monster traits than you do. They see the same monster description that you do, which means that monsters with the Mysterious trait (the ones that don't show what traits they have) are a complete mystery to them. If a player bumped into a mysterious monster and got a notice that it's spiked, they would probably stop hitting it; rivals just keep going. Rivals that actually learned would be a growing challenge.



Since the player can go anywhere in the overworld and I don't know which random monsters will be available, I can't fine tune stats and placement and difficulty ramps and experience tables the same way that most designers can. Instead I have to fine tune the rules and mechanics of how things tend to progress over time. Here's a game mechanic view of how iryrwarosh handles making opponents stronger over time.

A creature born for every creature lost. A slow positive feedback loop where things that are hard to kill tend to dominate in the long run. On the other hand, it could be exploited to make the world safer by only exterminating the most dangerous things. (like modern real life: few lions but lots of chickens)

Big monsters that become more common over time. A timer based positive feedback loop that can only be slowed by pacifism. On the other hand, a world with big monsters has more chances for fame so the game would end sooner.

Clans that adapt to use what is most effective in their daily encounters. It would have been a local positive feedback loop leading to clans optimized for what they've encountered. On the other hand, if their environment changes or the player's tactics change, they could fare poorly until they've had enough time to adjust. (again like real life - changing climate and changing politics have affected groups of people all the way back to when northern Africa was a lush forest)

Smarter rivals that intelligently use what they find. I think that to be truly useful they would need to keep data from past games or play very cautiously until they understand what to fear and what to attack. It would be really cool though. I was only able to put a few hours into the AI for my 7DRL, but my next roguelike will have more intelligent rivals that increase in difficulty over time.


Wednesday, June 6, 2012

Extract Method or Extract Class?

There's been a recent mini-revolution in how I think about refactoring.

For a few years my most common refactoring was Extract Method. Got a big method? Extract sub methods. Got a deeply nested method? Extract each level into a new method. Got any other problem? You should probably look for methods to extract.

I've changed my mind; these days I'm all about Extract Class, or more accurately Replace Method with Class. Got a big method? Cut out its body and paste it into a new method on a new class. Got a deeply nested method? Well... I guess Extract Method still works better for that one. Got any other problem though? Just pick the largest method (or #region, for you C# folks) and extract it to a new class.

Why the sudden change of mind? Recent real world experience that is best explained with bulleted lists within a matrix:



Mess
  • Extract Method: Unless there's a lot of duplication that you can quickly remove, the class size stays about the same.
  • Extract Class: Immediately reduces the class size by moving code to a new class. The compiler will tell you what it depends on and what depends on it.

Power
  • Extract Method - you get a new method:
      • you can call it with different parameters
  • Extract Class - you get a new class:
      • you can instantiate it with different parameters
      • you can subclass and override it
      • you can pass it around
      • you can move related behavior to the same class
      • you can shift the burden of resolving dependencies to an IoC container or factory

Concepts
  • Extract Method - small, local, procedural:
      • encapsulates a local series of statements
      • tends to aid in procedural thinking
      • few design patterns are simple methods
  • Extract Class - large, project-wide, object oriented:
      • encapsulates a bounded-context-sized role or responsibility
      • tends to aid in object oriented thinking
      • many design patterns are objects
      • moves responsibility to the new class, thereby increasing the cohesion of the original class

or, in prose form: Extract Class immediately reduces the amount of code you have to worry about, improves cohesiveness of the remaining class, and gives you familiar object oriented techniques to work with.
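Here's a contrived example of what I mean (all the names are invented for illustration). Instead of carving CalculateTotal into a handful of private helpers, the whole body moves to a new class and the original method shrinks to a delegation:

    using System.Collections.Generic;
    using System.Linq;

    public class LineItem
    {
        public decimal Price;
        public int Quantity;
    }

    // Before: Invoice owns the whole calculation.
    public class InvoiceBefore
    {
        public List<LineItem> Items = new List<LineItem>();

        public decimal CalculateTotal()
        {
            decimal subtotal = Items.Sum(i => i.Price * i.Quantity);
            decimal discount = subtotal > 1000m ? subtotal * 0.05m : 0m;
            return (subtotal - discount) * 1.08m;   // flat 8% tax, for the sake of the example
        }
    }

    // After: the method body is cut out and pasted into a new class...
    public class TotalCalculator
    {
        public decimal CalculateTotal(IEnumerable<LineItem> items)
        {
            decimal subtotal = items.Sum(i => i.Price * i.Quantity);
            decimal discount = subtotal > 1000m ? subtotal * 0.05m : 0m;
            return (subtotal - discount) * 1.08m;
        }
    }

    // ...and the original class just delegates. The calculator can now be
    // constructed with different parameters, subclassed, injected, or grown
    // into a real pricing policy without Invoice getting any bigger.
    public class InvoiceAfter
    {
        public List<LineItem> Items = new List<LineItem>();
        private readonly TotalCalculator calculator = new TotalCalculator();

        public decimal CalculateTotal()
        {
            return calculator.CalculateTotal(Items);
        }
    }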

Of course it's good to see whether related methods could be extracted into a new class after extracting a method, and it's probably good to break up the monster method after hiding it in a new class. I've just found it much easier to make progress when I rely on extracting large methods to new classes first.