Monday, December 8, 2014

Mocking non-injected services with Groovy

Sometimes we'll come across a piece of code that we want to test, but we're not able to inject a mock because the service is hard coded into the class. Consider this:

class Client {

    private final Service databaseService = new DatabaseService();

    def find(long id) {
       databaseService.find(id)
    }
}

Testing the find method would be difficult because the database service is not injected.

Groovy mocks and stubs can be used as categories for this case.

import groovy.mock.interceptor.MockFor
import org.junit.Test

class ClientTest {

    @Test
    void testFind() {
        def someObject = new Object() // whatever the mocked service should return
        MockFor mock = new MockFor(DatabaseService)
        mock.demand.find { id -> someObject }
        mock.use {
            Client client = new Client()
            client.find(1L) // this will use the mock and return someObject
        }
    }
}
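
If you don't need MockFor's strict verification of call order and counts, StubFor works the same way. Here's a quick, hypothetical sketch along the same lines as the test above (someObject is again just a placeholder):

import groovy.mock.interceptor.StubFor

def someObject = new Object() // whatever the stubbed service should return
StubFor stub = new StubFor(DatabaseService)
stub.demand.find { id -> someObject }
stub.use {
    assert new Client().find(1L) == someObject // like MockFor, but the demanded call order isn't enforced
}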

While injecting dependencies is easier, this is an alternative when injection isn't available. See "Using MockFor and StubFor" in the Groovy docs for more details.

Monday, October 13, 2014

Building fault tolerant applications with Hystrix

In any distributed system, failures will happen. Remote calls will fail, servers will go down and database calls will return errors. When these calls fail, it is important that the failures stay isolated and don't cascade throughout the system. With this in mind, Netflix built and open sourced their Hystrix library. They do a pretty good job of describing what it is:
Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable.
In particular, Hystrix allows you to easily employ the bulkhead pattern to isolate calls to different 3rd party systems and to use a circuit breaker to prevent too many repeated calls to a failing system. Here's an example of a simple command that does a surprising number of things:
  • This call will be executed on a thread pool to isolate it. All Hystrix commands with the group MyCommandGroup will use this pool.
  • This command will use a circuit breaker in case it fails. In this example, returning "Hello world" is not going to fail, but if it were a remote call it could.
  • If the command fails, it will automatically use the fallback method.
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class RemoteCommand extends HystrixCommand<String> {

    public RemoteCommand() {
        // "MyCommandGroup" determines which thread pool to use to execute the commands
        // Use a different group to separate logically different groups of commands
        super(HystrixCommandGroupKey.Factory.asKey("MyCommandGroup"));
    }

    @Override
    protected String run() {
        return "Hello world";
    }

    @Override
    protected String getFallback() {
        return "fallback to me if run() fails";
    }
}
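To actually run the command, you call the standard HystrixCommand methods; roughly (a quick sketch, not from the original post):
import java.util.concurrent.Future

String result = new RemoteCommand().execute()       // synchronous: blocks while run() executes on the command's thread pool
Future<String> future = new RemoteCommand().queue() // asynchronous: returns a Future immediately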
The properties of the command (timeouts, circuit breaker thresholds, etc.) are highly configurable and can be tuned to your needs.
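For instance, here's a rough sketch (not from the original post; the class name and values are arbitrary) of tuning the thread timeout and circuit breaker threshold via the HystrixCommand.Setter passed to the constructor:
import com.netflix.hystrix.HystrixCommand
import com.netflix.hystrix.HystrixCommandGroupKey
import com.netflix.hystrix.HystrixCommandProperties

class TunedCommand extends HystrixCommand<String> {

    TunedCommand() {
        super(HystrixCommand.Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("MyCommandGroup"))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                        .withExecutionIsolationThreadTimeoutInMilliseconds(500) // fail fast after 500ms
                        .withCircuitBreakerErrorThresholdPercentage(50)))       // open the circuit at 50% errors
    }

    @Override
    protected String run() { "Hello world" }

    @Override
    protected String getFallback() { "fallback" }
}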

If you need a battle tested library for implementing these patterns, check out Hystrix.

Monday, July 14, 2014

Jenkins Job DSL Plugin

Jenkins has a ton of plugins available for it, some better known than others. One that I've recently found to be very useful is the Job DSL Plugin. It allows you to use a Groovy DSL to script the creation of your Jenkins jobs. This is beneficial because it makes your jobs recreatable and avoids the pattern where you create a bunch of similar jobs and then let them slowly diverge. It also allows you to easily rebuild the jobs in the event of data loss.
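
To give a flavor of the DSL, here's a minimal sketch of a seed job script (the job name, repository URL and build steps are made up, and the exact DSL methods depend on your plugin version):

job {
    name 'my-project-build'
    scm {
        git('https://github.com/example/my-project.git')
    }
    triggers {
        scm('H/15 * * * *')
    }
    steps {
        gradle('clean build')
    }
    publishers {
        archiveJunit('build/test-results/*.xml')
    }
}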

As an industry we're getting better at automating the creation of our deployment environments with tools like docker, puppet, ansible and chef, but tools such as the DSL plugin allow for a nice way to automate the creation of your Jenkins jobs. For actually configuring your build environment using docker and ansible, check out this post http://blog.sequenceiq.com/blog/2014/05/09/building-the-build-environment-with-ansible-and-docker/ by the team at SequenceIQ.

Tuesday, May 20, 2014

Debugging gradle jettyRun in IntelliJ

I was trying to figure out how to attach a debugger to the jetty server launched by running an app from gradle jettyRun. It turned out not to be so straightforward, but here's what I did.

Step 1 - Upgrade to IntelliJ 13 if you're not there already. The gradle support has significantly improved.

Step 2 - Create a new Jetty Server (Remote) configuration. Gradle uses an embedded jetty server, but you need to point IntelliJ at the home folder of a Jetty installation. I downloaded Jetty 6.1.25 (Gradle 1.10 uses Jetty 6; this may change with newer versions) and pointed at that.

Step 3 - Configure the Jetty Server. I used the following values, but you may change them as needed.

Server tab:
- Application Server: Jetty 6.1.25
- JMX port: 2099
- Remote Staging type and host are both Same file system
- Remote connection (where your app is running) defaults to localhost:8080

Startup/Connection tab (Select Debug configuration):
- Port: 52252 (default)
- Transport: Socket (default)
- The window will show the arguments you need to pass via GRADLE_OPTS (or however you pass JVM options to Gradle, such as through gradle.properties). These are in addition to any other options you already pass to Gradle.

Step 4 - Create a jetty config file (or modify your existing one if you have one) and point your jetty tasks at it, as shown in http://blog.james-carr.org/2011/12/20/enabling-jmx-in-gradles-jetty-plugin/. I did not have a jetty-env file.
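
In build.gradle, that wiring looks roughly like this (a sketch assuming the standard Gradle jetty plugin; the file path is illustrative, and the jetty.xml contents come from the post linked above):

jettyRun {
    // jetty.xml contains the JMX MBeanContainer setup from the linked post
    jettyConfig = file('src/main/jetty/jetty.xml')
}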

Step 5 - Start your gradle process from the command line with the following opts (or again here, in gradle.properties):

export GRADLE_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -DOPTIONS=jmx -Xdebug -Xrunjdwp:transport=dt_socket,address=52252,suspend=n,server=y -javaagent:/opt/idea-IU-133.1122/plugins/Groovy/lib/agent/gragent.jar"


Note that -Xdebug and the options after it come from the arguments shown in the Startup/Connection tab in Step 3.

Step 6 - Once your app has started, start the jetty configuration from IntelliJ in Debug mode. Run your webapp, and your code should now stop at breakpoints in IntelliJ. Many thanks to StackOverflow's @CrazyCoder (http://stackoverflow.com/questions/14825546/deploy-debug-remote-jetty-with-intellij-12) and to James Carr for his blog post referenced above.

Thursday, March 13, 2014

Avoid flag parameters in public methods

I was recently reading some code that I wrote a while back, and it looked something like this:
run(true);
And then later on in the code I saw
run(false);
At the time I wrote this, it was very clear what it meant, and I was saving myself time by having a single method where a flag slightly altered the behavior. Reading this code several years later, it wasn't clear at all what it was doing. So how could it be written to be easier to read? Instead of putting the flag in the public method, have two separate methods and bury the flag in a private method. After refactoring, my code now looks like:
runSynchronously();
and
runAsynchronously();
Internally, there is still a run method that takes a flag for running asynchronously, but the public API is now a lot easier to understand.
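The refactored shape looks roughly like this (the class name and body are illustrative, not from the original post):

class Job {

    void runSynchronously() {
        run(false)
    }

    void runAsynchronously() {
        run(true)
    }

    // the flag still exists, but it's now an implementation detail hidden from callers
    private void run(boolean async) {
        // ... do the work, asynchronously if requested ...
    }
}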

Tuesday, March 11, 2014

Effective Code Reviews

There are numerous ways to conduct code reviews, ranging from the very formal to a constant review with pair programming. In this post, I'll discuss techniques that I have used for code reviews and what I expect to get out of a code review. In an environment where there is not pair programming, to me, code reviews are a must. I've seen huge improvements in codebases from doing reviews, resulting in both less code being written as well as cleaner code.

There are several reasons why code reviews are important...

They do find some bugs - not all of them, especially if the reviewer is not particularly familiar with that area of the code - but they help to weed out some of the obvious ones. Even the best developers make mistakes and having a second set of eyes can help. You can use automated tools where possible to filter out some things, but another look at the code is invaluable.

They help familiarize reviewers with areas of the code they don't know as well. As a system grows larger, not everyone will know every part of the code. By doing reviews, you widen your understanding of the code, and you may find code that is useful for something else you're working on later. For example, I've worked on larger codebases where similar utility methods appear in several different places because people didn't know they already existed.

They socialize conventions. As you review other people's code and vice versa, you will start to pick up conventions that other developers use. Over time, you end up with a more consistent looking codebase, making it easier to read regardless of the author.

They are learning experiences. I've often been reviewing someone else's code and learned something I didn't know, such as a technique for doing something in the language we're using.

Peer pressure helps produce cleaner code. If I write something poorly, such as code that isn't well tested or well documented, I know the review will be rejected. While I like to think I always write everything the best I can, this is a nice reminder that other people will be reviewing the code.

How to conduct a review...

We've all probably done the big, formal, ceremonious review at one time or another. Several days before the review, someone sends out the code to be reviewed or even prints it out and drops it off at your desk. You're expected to come to the review having already read the code. In these situations, it has been my experience that only a small percentage of people do a thorough review; many reviewers simply show up without having read the code ahead of time, or note only very trivial things like "this comment is misspelled." I think these types of reviews are generally a waste of too many people's time.

Currently I use Atlassian's Crucible to do reviews. When I am ready for my code to be reviewed, I select one or two people and add them to the review. They receive automatic notifications that they have a review waiting, and they have some amount of time to complete it. The reviewers add comments to the code and can optionally raise defects in JIRA. With this setup, people can review at their own pace. Similar to one benefit of code reviews themselves, the peer pressure of having your comments publicly visible tends to make for more thorough and thoughtful reviews. I try to avoid long threads of comments on the review because tone can often be misconstrued. Any more than one or two comments, and I'll just have an in-person discussion.

These are just my experiences with code reviews, and yours may vary. Feel free to comment to add to or disagree with anything I've said here.