Annotation Processing in Java

Java 8 introduced Type Annotations. This means you can do something like the following:

public void processChange(@NonNull Boolean set){}

Java points out that it doesn’t supply a type checking framework, but this got me thinking: how can we, as a company, use annotations to write better internal code? Of course we all use annotations on a regular basis, but how can we signal our intentions to coworkers and reduce bugs? This is one of the thoughts behind Design by Contract. I didn’t want to get into writing a full-blown type checking framework, but I did want to understand the basic built-in Java annotation mechanism. The following represents a tech demo project that I used to understand annotations better.


Say you have a project (an open source project, a tutorial project, or even a new framework) that you’d like to make easier for users to understand. It would be nice if you could annotate pieces of the code and generate a report (or, preferably, an IDE plugin) that points to and describes those pieces. So I created a library that lets you attach a message, and a priority for that message, to parts of the code. The idea is that you could run your annotations over a project and produce an ordered list of key points in the code.

Below is a trivial example of how one would use Documenter.

static void addTask(Project project){
    @Document(key="Must declare extensions in RunSimpleExtensions class", priority=1)
    project.extensions.create("runSimple", RunSimpleExtension)

    project.task("runSimple", type: JavaExec) {
        @Document(key="First use of extensions", priority=2)
        main = project.runSimple.mainClass
        classpath = project.sourceSets.main.runtimeClasspath
        @Document(key="Second use of extensions", priority=3)
        args = project.runSimple.args
    }
}


Creating the annotation is trivial: just use the @interface annotation type definition. The two things to consider when creating an annotation are (1) where you want to use it (the target) and (2) how long you want it to stay around (the retention policy). In the Document example I use a RetentionPolicy of SOURCE, which means the annotation is discarded by the compiler and stays around for the shortest amount of time. For this tool, the most useful thing is for the annotation to be usable in all possible places. If you don’t specify the Target, Java assumes you can use the annotation on any allowed element; if you don’t specify the Retention, it defaults to CLASS (recorded in the class file but not available at runtime).

import static java.lang.annotation.ElementType.*;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.SOURCE)
@Target({TYPE, FIELD, METHOD, PARAMETER, CONSTRUCTOR, LOCAL_VARIABLE})
public @interface Document {
    int priority();
    String key();
}
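To see what the retention policies mean in practice, here is a small self-contained sketch (the Visible and Discarded names are invented for the demo): a RUNTIME-retained annotation is still there when you reflect on the method, while a SOURCE-retained one, like @Document, is gone by runtime.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class RetentionDemo {
    // Visible at runtime through reflection.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Visible {}

    // Discarded by the compiler, like @Document's SOURCE retention.
    @Retention(RetentionPolicy.SOURCE)
    @Target(ElementType.METHOD)
    @interface Discarded {}

    @Visible
    @Discarded
    static void annotated() {}

    public static void main(String[] args) throws Exception {
        Method m = RetentionDemo.class.getDeclaredMethod("annotated");
        System.out.println(m.isAnnotationPresent(Visible.class)); // true
        System.out.println(m.getAnnotations().length);            // 1 (only @Visible survives)
    }
}
```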

Service Provider

Let’s take a detour and talk about the Service Loader framework in Java. The Service Provider Interface was designed to allow a third party to add functionality to an application. To do this, a user implements an interface, puts the implementation on the classpath, and points to it in a file under META-INF/services. The application then uses the ServiceLoader to ask Java for those classes. Here is an example:

private ServiceLoader<Documenter> loader;

private DocumenterService() {
    loader = ServiceLoader.load(Documenter.class);
    Iterator<Documenter> documenters = loader.iterator();
    while (documenters.hasNext()) {
        Documenter d =;
        // use the discovered implementation
    }
}

(This can be seen as a basic implementation of Dependency Injection.) Annotations are handled in a similar way, but instead of you using the Service Loader, Java will pass the annotations to a class that you create that implements the javax.annotation.processing.Processor interface.
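The whole Service Loader round trip can be seen in one self-contained sketch (the Greeter and HelloGreeter names are made up for the demo). It fakes the third-party classpath entry with a temp directory containing only the provider-configuration file under META-INF/services:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class LoaderDemo {
    public interface Greeter { String greet(); }
    public static class HelloGreeter implements Greeter {
        public String greet() { return "hello"; }
    }

    static List<String> greetings() throws Exception {
        // Simulate a third-party jar: a classpath entry whose only content
        // is the provider-configuration file under META-INF/services.
        Path root = Files.createTempDirectory("spi");
        Path services = root.resolve("META-INF/services");
        Files.createDirectories(services);
        // File name = service interface; file content = implementation class.
        Files.write(services.resolve(Greeter.class.getName()),
                List.of(HelloGreeter.class.getName()));

        ClassLoader cl = new URLClassLoader(
                new[] { root.toUri().toURL() },
                LoaderDemo.class.getClassLoader());
        List<String> result = new ArrayList<>();
        for (Greeter g : ServiceLoader.load(Greeter.class, cl)) {
            result.add(g.greet());
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(greetings()); // [hello]
    }
}
```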

Abstract Processor

If you have created a javax.annotation.processing.Processor file and put it in the META-INF/services/ folder, then Java will call the class(es) that you list in that file. The following is part of the AbstractProcessor for the Documenter project. (It is standard to extend AbstractProcessor rather than implement the Processor interface directly.)

@SupportedAnnotationTypes({ "com.scuilion.documenter.Document" })
public class AnnotationProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        Map<String, Note> documents = new HashMap<>();
        if (!roundEnv.processingOver()) {
            Scanner scanner = new Scanner();
            scanner.scan(roundEnv.getElementsAnnotatedWith(com.scuilion.documenter.Document.class), documents);
        }
        return true;
    }
}
  • @SupportedAnnotationTypes. Java will filter out all annotations other than the ones you specify. If you really want to handle every annotation in the system, declare "*" instead.
  • RoundEnvironment. Round environments hold the annotations that are available for processing in the current round.
  • Scanner. Scanner is an extension of the ElementScanner class. Calling scan will pass the annotated elements to the appropriate visitXYZ method. For instance, visitPackage will be called when a package-level annotation is used. (Have you ever seen a package annotated?)

The processor explicitly uses the visitor pattern. I won’t go into the ElementScanner here; you can see my example on GitHub. The end result is that each annotated position is processed, and the element’s type and enclosing class are written out.
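To make the visitor mechanics concrete, here is a hedged, self-contained sketch (the class names and the mini Scanner are invented for the demo, not the real Documenter code). It compiles a small source file with -proc:only and records what the scanner visits; note that scanning a class also walks its enclosed elements:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.lang.model.util.ElementScanner8;
import javax.tools.JavaCompiler;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;

public class ScannerDemo {
    static final List<String> notes = new ArrayList<>();

    // A stripped-down stand-in for the Documenter Scanner: each visitXYZ
    // method records where an element was found.
    static class Scanner extends ElementScanner8<Void, Void> {
        @Override public Void visitType(TypeElement e, Void p) {
            notes.add("class " + e.getSimpleName());
            return super.visitType(e, p); // also scans enclosed elements
        }
        @Override public Void visitExecutable(ExecutableElement e, Void p) {
            notes.add("method " + e.getSimpleName());
            return super.visitExecutable(e, p);
        }
    }

    @SupportedAnnotationTypes("Document")
    public static class Proc extends AbstractProcessor {
        @Override public SourceVersion getSupportedSourceVersion() {
            return SourceVersion.latestSupported();
        }
        @Override public boolean process(Set<? extends TypeElement> annotations,
                                         RoundEnvironment roundEnv) {
            if (!roundEnv.processingOver()) {
                for (TypeElement a : annotations) {
                    new Scanner().scan(roundEnv.getElementsAnnotatedWith(a), null);
                }
            }
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        Path src = Files.createTempDirectory("demo").resolve("");
        Files.write(src, List.of(
            "@interface Document { String key(); int priority(); }",
            "@Document(key = \"class level\", priority = 1)",
            "class Demo {",
            "    @Document(key = \"method level\", priority = 2)",
            "    void run() {}",
            "}"));
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        StandardJavaFileManager fm = javac.getStandardFileManager(null, null, null);
        JavaCompiler.CompilationTask task = javac.getTask(null, fm, null,
            List.of("-proc:only"), null, fm.getJavaFileObjects(src.toFile()));
        task.setProcessors(List.of(new Proc()));
        task.call();
        System.out.println(notes);
    }
}
```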

Weld CDI: User Injected Functionality

Here is the scenario: suppose you are creating a library that aggregates data from a system and sends that information to standard out. A user may want to use your library to aggregate the same information but send the data elsewhere (e.g., a database, a file, etc.). How do you provide these features in your library without the user having to explicitly create your object and pass in a writer? (Similar to creating an extensible application or plug-in.) You could use Java’s Service Provider, but then you would have to programmatically exclude the default behavior when users implement their own.

Recently, I’ve been working on creating an annotation library. In order to create an annotation library in Java, you have to extend AbstractProcessor and declare that class as a javax.annotation.processing.Processor in META-INF/services. Java picks this class up automatically, so I needed a way to inject a user’s implementation class without changing the code, and I wanted to disable my default implementation when that occurs. In comes Dependency Injection (DI) in the form of JEE’s CDI service.

The following is an example of how to replace one bean implementation with another using CDI. I’m using Weld, the JEE reference implementation of CDI, in an SE fashion, so there is no need to run in a container. The example uses Alternatives to replace a default implementation with a user-created implementation. You can find the working example on GitHub.

The first thing is initializing Weld. Weld requires the base beans.xml file to be in the META-INF folder, and because I’m not using an EE container, the container has to be created manually. (Note how the container is created and destroyed before the injected class is used.)

public class Producer {
    Writer writer;

    public void someLibraryMethod() {
        Weld weld = new Weld();
        WeldContainer container = weld.initialize();
        writer = container.instance().select(Writer.class).get();
        weld.shutdown(); // the container can be shut down before the instance is used
    }
}

Writer is the interface that you and the consumers of your library will implement. Your library will come with a default implementation. If the consumer does not create their own implementation, then Weld will load this default.

public class WriterImpl implements Writer {
    public void process() {
        System.out.println("in default writer implementation");
    }
}

And here is an example user implementation. All we have to do is use the @Alternative annotation to tell Weld that we want this class, as opposed to the default class, to be injected.

@Alternative // enabled via <alternatives> in the consumer's beans.xml
public class ReplacementWriterImpl implements Writer {
    public void process() {
        System.out.println("in alternative");
    }
}

Notice that the only thing different is some extra print statements. Let’s see how this works when running. I’ve set up tests for the base and the consumer under com.BaseTest and com.ConsumerTest, respectively. If I run the base test, with only the default implementation, I get the following.

kmb-us-master109:weldit kevin.oneal$ ./gradlew :base:test

com.BaseTest > testSomeLibraryMethod STANDARD_OUT
    in default writer implementation


Total time: 4.089 secs

And when we run the consumer test, we get the following.

kmb-us-master109:weldit kevin.oneal$ ./gradlew :consumer:test
:base:compileJava UP-TO-DATE
:base:processResources UP-TO-DATE
:base:classes UP-TO-DATE
:consumer:processTestResources UP-TO-DATE

com.ConsumerTest > testSomeLibraryMethod STANDARD_OUT
    in consumer
    before called to producer 
    in alternative
    after called to producer 


Total time: 3.971 secs

You can get this project from GitHub under the tag v0.1.

Ultimately, the difference between using the Service Provider and Weld’s interpretation of CDI is that with DI I can eliminate the very trivial default implementation. Although I haven’t tried it, Weld is supposed to let you mimic the Service Provider feature where Java picks up all implementations of an interface or abstract class, just by changing the @Inject variable to take a list.

Using Gradle Setup Info Outside of Gradle

There is a Dev/Ops guy at my company who is always trying to be clever. His ultimate goal is to reduce build times, so I can’t get upset. Recently, he was trying to figure out which projects were building and where they were located. He generated this list by making some assumptions about how our Gradle build determines subprojects and created a script to generate a list of those locations. The problem with that is, if the scheme for subproject generation changes, he has to manually maintain his script.

In comes a little cheat.

Most projects that you work on will have more than one subproject. It is good to separate concerns even if all the parts of the project are required to work together. This eliminates pesky problems like circular dependencies, full recompilation, etc. (See the Gradle docs, Chapter 57.5.) You can use the settings.gradle file to set up a multiproject build. A typical settings.gradle file looks like so (actually, this one comes from the Gradle GitHub page):

include 'distributions'
include 'modelGroovy'
include 'cunit'
include 'platformPlay' = 'gradle'
rootProject.children.each { project ->
    String fileBaseName =
    String projectDirName = "subprojects/$fileBaseName"
    project.projectDir = new File(settingsDir, projectDirName)
    project.buildFileName = "${fileBaseName}.gradle"
}

This is typical of a multiproject build: create a bunch of subprojects and set their properties. In fact, instead of listing each subproject individually, you can set a standard location for projects and resolve them dynamically.

new File("$settingsDir/subprojects/").eachDir { dir ->
    include dir.name

Where include is the method used to specify new subprojects.

The goal of the Dev/Ops guy is to get the list of projects and their locations. There are two ways of accomplishing this task: 1) create a task in Gradle (essentially just a task with a doLast block), or 2) use a Groovy script to read the settings file. The problem with the first option is that once your company-wide Gradle system starts to grow, so does the configuration time. This configuration time is inconsequential with respect to the rest of the build, but you might become impatient if all you want is a quick list of projects and project locations. The problem with the second option is that it’s hacky. But sometimes that’s fun.

I’ll assume you know how to use Gradle to get this task done and just show the groovy script.

The secret is Groovy Bindings. The script goes in the root directory next to the settings.gradle file. It uses bindings to mock out the key pieces that are usually found in the settings file (settingsDir, rootProject, include, project). After creating the mock data, you call evaluate on the settings.gradle file. In the end, the fake Project objects will hold each project.

Binding binding = new Binding()
def workingDir = new File('.').getCanonicalPath()

binding.setVariable("settingsDir", workingDir)
binding.setVariable('rootProject', ['name': ''])

// undeclared, so it lands in the binding and is visible to getProjects()
projects = []
def include = { component ->
    projects << new Project(component)
def project = { projectName ->
    def currentProject = new Project(projectName)
    projects << currentProject
    return currentProject
binding.setVariable("include", include)
binding.setVariable("project", project)

GroovyShell shell = new GroovyShell(binding)
shell.evaluate(new File(workingDir, 'settings.gradle'))

class Project {
    def name
    def projectDir
    def buildFileName
    Project(def name) { = name

def getProjects() {
    return projects

Debugging Vim Plugins With strace or: Remembering the Past


Syntastic has several ways for you to define your classpath when using javac. The way that I’ve chosen to manage my workspaces is by using the dot config file (.syntastic_javac_config). Syntastic will look for a global Vim variable, g:syntastic_java_javac_classpath, in the config file located in the pwd. I’d previously gotten Syntastic to compile correctly using my config file, but when I moved to another computer it inexplicably stopped. Learning how to write or debug Vim plugins is not at the top of my to-do list. Recently a friend linked me to some blog posts about strace. This seemed like a perfect time to learn a new tool.

The following is a dry recounting of what I did to debug the issue.


There is not a way, that I know of, to run Syntastic directly, so I need to trace through Vim. In order to do this, I have to trace the child processes of Vim by attaching to its pid.

ps -ef | grep vim

You can have strace attach to an already running process by using the -p option (in the example, the pid is 24452).

sudo strace -o strace_out -fp 24452 -s 2048

When I save a .java file in Vim, Syntastic will kick off a javac process. In order to see what Syntastic is doing, all the child processes need to be logged, using the -f option.

The output has been sent to strace_out. In this file we’re looking for an execve call that runs javac. A quick search of the file (%s/execve.*javac//gn) shows the javac process being called and, most importantly, the classpath being passed in (the -cp option). (Whitespace added for readability.)

25019 execve("/bin/bash", ["/bin/bash", "-c", "(javac -Xlint -d /tmp/vim-syntastic-javac -cp
/home/kevino/projects/component-dependency-grapher/src/main/java/com/scuilion/dependencygrapher/neo4j/node/ 2>&1) &> /tmp/vM7OMl8/21"], [/* 68 vars */]

I verified the classpath in the Syntastic config file. The call that strace shows includes all the local source folders but none of the jar files. My next thought was that the plugin was not finding my config file, or that the plugin was trying to validate each file in the classpath and failing. It appears that it does check each file, but it doesn’t have a problem finding them.

24452 stat("/home/kevino/.gradle/caches/modules-2/files-2.1/org.neo4j/neo4j/2.1.2/de24992e14593667756c8042f6a26c6c6ff41271/neo4j-2.1.2.jar", {st_mode=S_IFREG|0664, st_size=24047, ...}) = 0

stat(...) = 0 means that the file was found and there was no error code returned.


So, what is going on here? I’d actually discovered and solved this problem before (hence, “remembering the past”). Syntastic uses wildignore in the .vimrc to exclude certain file types. So does the Vim plugin CtrlP. When I search through a project using CtrlP, I don’t want jar files returned in the results, so I had added the .jar extension to wildignore.

Even though I had solved this issue before, it was a great starter problem for experimenting with strace. I’m looking forward to using strace to bolster my debugging skills.

Gradle – Create a New File From Text

This feels like a feature that should be baked into Gradle, or at least common, but nothing comes up when I search for how to create a file on the fly. The requirements:

  • Create a file where one does not exist from generated text
  • Incrementality

The following is my solution:

project.task('createFile') { "myTextProperty", "any old text"
    def outputFile = new File("$project.buildDir/myFile.txt") // any destination works
    outputs.file outputFile
    doLast {
        outputFile.write inputs.properties['myTextProperty']

Using the inputs and outputs makes the task incremental, and the doLast is required if you don’t want the file to be created during the configuration phase. You might also need to set the input property in the doLast block if the information is not available until execution. That was not necessary the last two times I was creating a file: once creating a version file from a hard-coded string, and once using the Gradle classpath to generate a file.

How do you create a file on the fly?

Here is another solution I found.

First Experience with Bintray

When I started to learn Gradle, I wrote a simple plugin. It was a fairly useless adapter for JavaExec. It automatically set up the classpath and created an extension for pointing to the main class. This was purely an exercise.

project.extensions.create("runSimple", RunSimpleExtension)

project.task('runSimple', type: JavaExec) {
    main = project.runSimple.mainClass
    classpath = project.sourceSets.main.runtimeClasspath
}

Recently, I’ve been beefing up my development process in Vim and installed Syntastic. This plugin provides syntax checking by running external checkers, two of which I needed: JSHint and javac. Out-of-the-box, Syntastic works great with Java, until you start adding external libraries. Fortunately, I use Gradle on all of my projects, and Gradle makes it easy to determine your dependencies.

project.sourceSets.each { srcSet -> { dir ->
        // collect each classpath entry (jars and class folders)
    }
}

So I added this functionality to my original plugin and called it gradle-utils. The problem was the hassle of using the plugin from one computer to the next: I’d have to pull the project from GitHub and publish it locally (using the maven-publish plugin). Not to mention, if I made changes, the whole process would start over.

In Walks jCenter

This was a perfect opportunity to try out Bintray. I’d had an account, but other than signing up, it sat dormant. Here is a list of the things I learned while uploading my first artifact.

  • Don’t forget you have to push up your source as well as the compiled classes if you want to attach your package to the public jCenter repo. I’m using the Gradle maven-publish plugin and accomplish that like so:
    task sourceJar(type: Jar) {
        from sourceSets.main.groovy
        from sourceSets.main.resources
    }
    artifacts {
        archives jar, sourceJar
    }
    publishing {
        publications {
            maven(MavenPublication) {
                artifact sourceJar {
                    classifier "sources"
                }
            }
        }
    }
  • Gradle 2.1’s new developer plugin index makes including the Bintray plugin a snap. (Example of this below.)
  • In order to include your package in the publicly accessible jCenter, you have to ask. It took me longer than I would like to admit to find out how to do this. I assumed that the option would be located somewhere within the package you were attempting to release, but it’s actually on the jCenter home page on Bintray.

A Personal Plugin for Personal Use

This plugin is very “me” centric, but it’s really easy to get set up, assuming you already have the Syntastic plugin working in Vim. There are two things you need: 1) set Syntastic so that it uses a config file, and 2) add the gradle-utils plugin to your build.gradle file.

1) .vimrc

let g:syntastic_java_checkers=['checkstyle', 'javac']
let g:syntastic_java_javac_config_file_enabled = 1

2) build.gradle

buildscript {
    repositories {
    dependencies {
        classpath group: 'com.scuilion.gradle', name: 'utils', version: '0.2'

apply plugin: 'utils'

Note: This is a post process plugin and should be applied at the end of your build file.

When junit is commented out of the build file, Syntastic shows that it can’t compile this particular file.

An aside: I used Gradle 2.1’s developer index to include the BinTray plugin. So instead of:

buildscript {
    repositories {
    dependencies {
        classpath "com.jfrog.bintray.gradle:gradle-bintray-plugin:0.5"
apply plugin: "com.jfrog.bintray"

all I need is:

plugins {
    id "com.jfrog.bintray" version "0.5"

Pretty cool!

Takeaways from UberConf 2014

As usual, the NFJS guys put on a very informative event. I figured I would write down my thoughts on the week before they get stale. This is my fifth event to attend but my first UberConf. The added workshops were definitely the hook that got me to pester my boss to pay for this. It has to be a tough task to come up with a simple yet helpful/doable set of exercises, especially if the workshop is only half a day. Most of the workshops could have been twice as long; by the time the presenting part of the session was done, there was little time to do actual work. (System setup was usually not handled in the best way.) In one class that I attended, the “workshop” title was just an excuse to talk longer. Fortunately, the long discussion was not bothersome. The following is a list in no particular order, just a data dump.

  • Neo4J – Six months ago I finished a small one-off project that would have benefited if I had used Neo4J. I wish that I had thought to use a graph database. Here in the next week I will rewrite that program. (Future post to follow on this.)
  • Apache Camel – This pops up just about everywhere. A month ago I consulted on how to setup an Apache ServiceMix project and several of the presenters were using Camel as part of their examples. I need to come up with a project, so I can get more hands on.
  • Continuous Delivery vs Continuous Deployment – This reminded me of the Shippable vs Saleable argument (or minimum viable product). My company’s product (at least the one I work on) is such a behemoth legacy app that even the discussion of Continuous Deployment is far down the road.
  • NGINX – came up in discussion several times. I don’t remember it being so popular before.
  • Clojure – I’ve been deciding which new language to delve into and plan on spending a good amount of time learning. For a while it was a toss-up between Clojure and Scala. Clojure has officially won for me. The thought is that with Java 8 and its added functional features, I would go full-on and learn a Lisp as opposed to some middle ground. I’m using the following resources (along with the obvious Clojure references and cheat sheet): Koans,, and Seven Languages in Seven Weeks.
  • OSGi – For several months now I’ve been dealing with a rewrite of our OSGi implementation, which was fragile to begin with and is out-of-date (it’s stuck on Java 6 and Buckminster). Talking with one of the presenters, I found out that I’d missed OSGi DevCon by only a couple of weeks. This would have been a tremendous resource for figuring out the last few missing pieces. It also appears that the DevCon people didn’t put up any videos, which would have been helpful. Guess I’ll find out what the slides give me.
  • Pomodoro Technique – On Wednesday the conference went late, up until 10. I decided that I would take a session that was less technical since I had been stuffing my brain all day. A friend of mine had encouraged me to try out the Pomodoro Technique. It seemed like a good idea, but I didn’t read his blog post about it and eventually forgot. After this session, I decided I would give it a try. In fact, my first two pomodoros were done the night after the conference and hit the mark for time. An abnormality that I chalk up to beginner’s luck, as my next 5 were staggeringly underestimated.
  • Docker – Another thing that the aforementioned friend put me onto. I’d mostly let it go without further research because I’m currently stuck on Windows at work. The session was enlightening and the flexibility of Docker was reiterated in a preceding workshop where environment setup could have been drastically reduced with the help of Docker.
  • Bintray – I was reminded about my stale account. Hopefully I’ll be using jcenter as soon as I get the Neo4J project going.


Books that made it onto my reading list:

  • Scalability Rules
  • The Design of Everyday Things – I’m halfway through this book. It’s my airplane book, and seeing as I don’t go on a lot of trips, it is taking a while for me to get through it.
  • Presentation Matters – This is co-authored by a couple of regular NFJS presenters. This is a soft-skills topic that techy people seem to try and avoid, especially me. The last presentation I gave in front of my organization was about Java 8’s added type annotations and repeating annotations. Afterwards, my boss commended me on my presentation but suggested that I tell people who I am before jumping in.
  • How To Win Friends And Influence People – You hear about a certain book, and after you hear about it a number of times, it just automatically gets put on your reading list.
  • Understanding Cryptography

Two soft-skill books and two technical books added to my reading list. Plus two pages of bullet-point tasks of personal work that I need to get done.

NERDTree and CtrlP in Vim

Finally carved some time out to tweak how I navigate code in Vim. I’ve had NERDTree installed, but because I didn’t spend the time to learn the navigation keys, I ended up falling back on my old method for navigation. After learning the keys, I set up a quick way of loading the tree view (noremap \\ :NERDTreeToggle<CR>), and set all my bookmarks and show them by default (let NERDTreeShowBookmarks=1).

CtrlP is a great plugin for when you know your code base. I added a map key to reduce typing (let g:ctrlp_map = '<c-p>') and defaulted to name search instead of path search (let g:ctrlp_by_filename = 1).

Integrating these two is where I find the most usefulness. I work with a bunch of branches, some in Git and most in AccuRev. One of the projects I work on is actually buried down in the project structure. Because I have different configurations for projects, the “root marker” feature of CtrlP is not the best solution. But by having NERDTree change the CWD whenever the root tree is changed (let NERDTreeChDirMode=2) and setting CtrlP to search under the current CWD (let g:ctrlp_working_path_mode = 'a'), all I have to do when starting work on a certain project is change my working directory (the C hotkey in NERDTree, or just selecting a bookmark) and CtrlP will automatically search in that project.

(Now I just need to get vim to automatically update saved files to my deployment area.)

.vimrc snippet for CtrlP:

"for CtrlP
let g:ctrlp_map = '<c-p>'
let g:ctrlp_cmd = 'CtrlP'
set wildignore+=*\\tmp\\*,*.swp,*.zip,*.exe,*.class,*.jar,*.html,*.xml
let g:ctrlp_root_markers = ['.acignore', '.gitignore']
let g:ctrlp_working_path_mode = 'a'
let g:ctrlp_by_filename = 1

.vimrc snippet for NERDTree:

" NERDTtree
let NERDTreeBookmarksFile=expand("$HOME/.vim-NERDTreeBookmarks")
let NERDTreeShowBookmarks=1
let NERDTreeChDirMode=2
noremap \\ :NERDTreeToggle<CR>

Groovy Syntax Highlight Update for Vim

The Groovy syntax highlighting that comes with the Vim installation has a few things that I don’t care for. As I transition away from Eclipse as my main development environment, I’ll spend more time getting Vim set up to support more efficient development. Below I’ve enumerated what changes I made to the syntax file and why.


Bold, yellow highlighting of TODO and FIXME tags. I’m not a big fan of blaring reminders. It’s also not good to rely on just stumbling on a TODO as a way of indicating work to be done. If you’re really going to “todo” it, go ahead and make sure you do more than leave a tag for the next sap to come along. They might not have TODO and FIXME highlighted in their editor and might not notice your rushed programming.

Why Should def Be Any Different:

“def” is a type in Groovy, similar to Integer, int, String, Array, etc. The word wasn’t highlighted and was getting lost when I skimmed code.

Important Variables Obscured by a Greedy Regex

The match regex used for import statements was too greedy. It would highlight parts of variables that had the word “import” in their name (e.g., importFileFromURL or someimportantmethod).
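The Vim pattern itself uses different syntax, but the failure mode is easy to reproduce with any regex engine. A sketch in Java (the patterns here are illustrative, not the actual syntax-file regexes):

```java
import java.util.regex.Pattern;

public class ImportRegexDemo {
    // A naive pattern matches "import" anywhere, even inside identifiers.
    static final Pattern NAIVE = Pattern.compile("import");
    // Anchoring to the start of the line with a trailing word boundary fixes it.
    static final Pattern ANCHORED = Pattern.compile("^\\s*import\\b");

    public static void main(String[] args) {
        String decl = "def importFileFromURL = {}";
        String stmt = "import java.util.List";
        System.out.println(NAIVE.matcher(decl).find());    // true  (false positive)
        System.out.println(ANCHORED.matcher(decl).find()); // false
        System.out.println(ANCHORED.matcher(stmt).find()); // true
    }
}
```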

“””GStrings Deserve the Same Respect as String”””

The most common use of GStrings in my code is when I am lazily slapping XML into a test and want to still be able to read it and add variable replacement. The original syntax file was only recognizing the first line of a multi-line GString.

The regex for matching import statements was taken from a Java syntax file. The GString update was from Tobias Rapp’s version 0.1.11 of the Groovy syntax file. I’ll maintain my version of the file on GitHub, with the caveat that some of the changes are skewed to my preferences and that I might add more. Hopefully, the README file will be kept up-to-date.

The Embedded Idea That Turned Into a Perl Script

My first purchase from SparkFun was a camera module (which they have since discontinued). The idea was to embed it into a project. When the device arrived, I thought, “Why don’t I just whip out a quick Perl module to test the device?” I already had the hardware to hook it to a computer and thought it would be an easy way to prove the device worked and find out what the quality of the photos would be. So I kludged together the hardware that I needed and got to writing some Perl.

The Hardware

  • USB to RS-232 converter (because who really has a COM port on their computer anymore?)
  • RS232 to TTL
  • C328-7640

On my old computer I actually had RS-232 ports, due to all the times I worked on projects that had debug serial signals, but there are lots of inexpensive, easy-to-use options out there. Level converters: I have several lying around, but again, there are plenty of cheap and easy choices. Now, you have to cut me some slack for the way the hardware looks; remember, I was just trying for a quick proof.

The Hardware - Notice the Freescale eval board that is only being used for an easy, portable power supply(3.3V).

And even though it looks “spaghetti-ish”, the zip tie and duct tape have held up well, and I haven’t had to scope the signals or debug connection issues.

The Software

It took about a couple of days to get my first full image. The module requires you to set the parameters (e.g., color type, resolution), set the package size, take the picture, ask for the picture (which returns the number of packages), and then read and acknowledge each data package. See the datasheet.
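I no longer have the Perl handy, but the shape of that download loop is easy to sketch. Here it is in Java against a faked transport; the ACK bytes below are placeholders, not the real datasheet values:

```java
import java.util.Arrays;

public class CameraDownloadSketch {
    public interface Transport {
        void send(byte[] cmd);
        byte[] receivePackage(); // next data package from the module
    }

    // Reassemble an image from `packageCount` packages, ACKing each one.
    public static byte[] download(Transport t, int packageCount) {
        byte[] image = new byte[0];
        for (int i = 0; i < packageCount; i++) {
            byte[] pkg = t.receivePackage();
            byte[] grown = Arrays.copyOf(image, image.length + pkg.length);
            System.arraycopy(pkg, 0, grown, image.length, pkg.length);
            image = grown;
            // Placeholder ACK frame -- the real bytes come from the datasheet.
            t.send(new byte[] { (byte) 0xAA, 0x0E, (byte) i });
        }
        return image;
    }

    public static void main(String[] args) {
        byte[][] fakePackages = { { 1, 2 }, { 3, 4 }, { 5 } };
        Transport fake = new Transport() {
            int next = 0;
            public void send(byte[] cmd) { /* ACK swallowed by the fake */ }
            public byte[] receivePackage() { return fakePackages[next++]; }
        };
        System.out.println(Arrays.toString(download(fake, 3))); // [1, 2, 3, 4, 5]
    }
}
```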

One of the First Successful Images - Full aperture/broad focal range and decent color fidelity.

After getting a good picture, I didn’t stop with the Perl, which pushed the work on well over the time I should have spent. I kept saying things like, “let me make it easier to change the parameters on the fly,” “maybe I should make this more OO using the Moose module,” and so on and so forth. So it took ~2-3 days to get an image and one year to move past it.

The Video

I got the idea to take some time-lapse photography with this. So I set up my computer in the shed out back and took a video from sunup to sunset.

Backyard Setup - Using the cinder block to hold the module adds to the kludginess of this project.

And here is the video. You can see there are some data issues. Maybe in a year I’ll get around to debugging those.