ZSH: Emoji Analog Clock

Just about any example of a customized ZSH prompt that you’ll find on the internet shows the time.

PROMPT='time: %T > '

I can’t recall the last time I noticed, or gained any advantage from, having the time printed in the terminal.

Lots of other customized ZSH prompts also have some emoji characters to spice things up: skulls 💀, ghosts 👻, etc. That’s easy enough to add.

ghost=$'\U1F47B'
PROMPT="time: %T $ghost > "


You can also paste an emoji directly into your configuration file, but I didn’t like the idea of having non-ASCII characters in my configuration files. I also wanted to make the emoji on my prompt a little more dynamic, and this is where the discussion of putting time into your custom prompt comes back.

The full set of clock face emojis is defined from U+1F550 to U+1F567 and looks like this.

🕐 🕑 🕒 🕓 🕔 🕕 🕖 🕗 🕘 🕙 🕚 🕛 🕜 🕝 🕞 🕟 🕠 🕡 🕢 🕣 🕤 🕥 🕦 🕧

The set first defines the top of the hour from 1 to 12, then the half hours from 1 to 12. So with a little bit of scripting we can get the command line to show us an analog clock of the current time by simply getting the hour and incrementing from the first hex value.

current_clock_emoji() {
    hour=$(date +"%l")
    echo -n '\U'$(([##16] 0x1F550 + hour - 1))
}

A couple of notes about this code: the %l gets the hour from 1-12 without padding, the 16 tells ZSH that you want the result in hexadecimal, and the doubled pound (##) symbol says to not output the base prefix (see the zsh documentation on arithmetic evaluation).

Now call the function when setting up your prompt (make sure setopt PROMPT_SUBST is set so the command substitution is re-evaluated each time the prompt is drawn). Pretty straightforward.

PROMPT='time: %T $(current_clock_emoji) > '

Here is what my final prompt looks like.

[screenshot: the prompt showing the time followed by a clock emoji]

I didn’t bother changing the clock for every half hour because that sort of granularity probably won’t be helpful. Here is a solution that changes the emoji on the quarter hours.
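If you did want that granularity, here’s a sketch of a half-hour variant (the emoji set has no quarter-hour faces). The clock_codepoint_for helper is a name I made up; it just maps an hour/minute pair onto the table above:

```shell
# clock_codepoint_for HOUR MINUTE -> prints the hex codepoint of the closest clock face
clock_codepoint_for() {
    hour=$1
    minute=$2
    cp=$((0x1F550 + hour - 1))     # U+1F550-U+1F55B cover 1:00-12:00
    if [ "$minute" -ge 30 ]; then
        cp=$((cp + 12))            # U+1F55C-U+1F567 cover 1:30-12:30
    fi
    printf '%X' "$cp"
}

current_clock_emoji() {
    echo -n "\\U$(clock_codepoint_for "$(date +%l)" "$(date +%M)")"
}
```

As before, the function emits a `\U` escape that zsh expands into the emoji itself.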

Extra

Below is the code I used to display all the clocks.

display_clocks() {
    for ((i = 0; i <= 0x1F567 - 0x1F550; i++)); do
        echo -n ' \U'$(([##16] 0x1F550 + i))
    done
}

Combining Two Objects in Java 8

Say you have a simple POJO:

@Getter @Setter
public class Person {
    private Long id;
    private String name;
}

You extend that POJO for a specific use case:

@Getter @Setter
public class SpecificPerson extends Person {
    private String role;
}

If you have a list of each type, how do you combine them?

List<Person> people;
List<SpecificPerson> specificPeople;

If both lists contain a similar subset of people, where Person has part of the data and SpecificPerson has the other part, a way to combine the two would be through a BiFunction. (This, of course, is assuming you have a common attribute, like id.)

public static BiFunction<Person, SpecificPerson, SpecificPerson> fromPersonToSpecific = (person, specificPerson) -> {
    specificPerson.setName(person.getName());
    return specificPerson;
};

Use this across the lists:

final Map<Long, Person> mappedPeople = people
    .stream()
    .collect(Collectors.toMap(Person::getId, Function.identity()));

List<SpecificPerson> s = specificPeople
    .stream()
    .map(specific -> fromPersonToSpecific.apply(mappedPeople.get(specific.getId()), specific))
    .collect(Collectors.toList());
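Putting it all together, here’s a self-contained sketch of the idea (getters and setters written out by hand in place of Lombok; MergeExample is a name I made up for the wrapper class):

```java
import java.util.*;
import java.util.function.*;
import java.util.stream.*;

class Person {
    private Long id;
    private String name;
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class SpecificPerson extends Person {
    private String role;
    public String getRole() { return role; }
    public void setRole(String role) { this.role = role; }
}

class MergeExample {
    // copies the Person half of the data onto the matching SpecificPerson
    static final BiFunction<Person, SpecificPerson, SpecificPerson> fromPersonToSpecific =
        (person, specificPerson) -> {
            specificPerson.setName(person.getName());
            return specificPerson;
        };

    static List<SpecificPerson> merge(List<Person> people, List<SpecificPerson> specificPeople) {
        // index the Person list by id so each SpecificPerson can find its counterpart
        Map<Long, Person> mappedPeople = people.stream()
            .collect(Collectors.toMap(Person::getId, Function.identity()));
        return specificPeople.stream()
            .map(specific -> fromPersonToSpecific.apply(mappedPeople.get(specific.getId()), specific))
            .collect(Collectors.toList());
    }
}
```

Note that Collectors.toMap will throw on duplicate ids, which is probably what you want here.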

My First Atlassian Plugin

Goal

An Atlassian Macro that will query Stash and return a list of valid branches from a particular repo, with the option to filter.

Basically, this REST call:

stash/rest/api/1.0/projects/{project}/repos/{repo}/branches?filterText=release

Problem

Where I work we have a couple dozen micro-services that may or may not need to be updated during a particular release phase. In order not to miss an artifact, I wanted a way to get automatic visibility into what needed to be deployed.

Recently, I had configured each of our projects to align with GitFlow (using JGitFlow). The workflow looks something like this: a developer makes updates to the develop branch, and when they determine their fix/feature is complete they cut a new release using the JGitFlow tools. The automated build sees this new branch, creates and deploys to the test servers, and then QA gets to it when they have time. One of the advantages of the Maven JGitFlow plugin is that it only allows one release branch at a time, the idea being that if someone looks at a repo and sees that it has a release branch, it needs to be part of the next production release.

We use Confluence and Stash, so the simple solution would be to have a macro ask for release branches and display them on our release page. After searching, I found one potential candidate that would display information from Stash on a Confluence page. There were two problems with this plugin: 1) even though it would list branches, it would not let you filter branches, and 2) it cost money.

So, what’s a developer to do? Write my own plugin. This turned out to be more of a chore than I expected. I won’t delve into my–mostly–negative opinions about the Atlassian developer community or the Atlassian documentation, but what I will do is give a brief rundown of what I did and what I learned.

Getting Started

Skip the blah, blah and go straight to the code.

Whenever I read the words, “just install the SDK”, I cringe a little inside. It never seems to be that straightforward, and the Atlassian SDK is no different. The problem for me is that I do regular Maven development, and that clashes with Atlassian’s flavor of Maven built into their SDK. I would rant about how the Atlassian SDK could be greatly simplified by a Gradle-like wrapper feature, but that’s not the point of this blog. I hope to give a short, concise example of creating an Atlassian plugin and clear up some of the ambiguity I found along the way. I’ll also briefly touch on some solutions to problems I had.

Installing

To install, use the standard documentation. When you go to run commands, skip any trouble by explicitly pointing to the SDK settings instead of trying to incorporate the Atlassian settings into what are probably your corporate settings already in your ~/.m2/settings.xml file.

atlas-run --settings /usr/share/atlassian-plugin-sdk-6.2.6/apache-maven-3.2.1/conf/settings.xml

This is a problem with Maven not allowing mirror definitions in profiles.

Running

Running Confluence is pretty straightforward, but you’ll probably want to add a few developer flags to make debugging easier.

atlas-run --settings /usr/share/atlassian-plugin-sdk-6.2.6/apache-maven-3.2.1/conf/settings.xml \
--http-port 1990 -Datlassian.dev.mode=true \
-Datlassian.webresource.disable.minification=true

After you have Confluence up and running, you’ll inevitably do some configuration of the app. If you’re working with two apps, you’ll inevitably create a link between the two. This can be a tedious process (watch out for case sensitivity when defining URL links). About a day after you first start the app, it will complain about licensing. The way to fix this is to run atlas-clean and rerun the app; however, that will destroy the work you’ve done. To get around this, create a snapshot of your configuration using the atlas-home command. It will create a zip file that you can refer to in your pom.xml file.

<productDataPath>${project.basedir}/src/test/resources/generated-test-resources.zip</productDataPath>

There are two things to remember here: 1) the app must be running when you call the atlas-home command, so open another window in the same directory, and 2) if you make changes or additions to how your app is set up, you must rerun the atlas-home command; it’s not automatic.

For a macro that talks from one app to another (Confluence to Stash), you have to have both apps running and connected. There is supposedly a way to define multiple apps in your pom.xml file and run them concurrently, but I kept having trouble with that, so I just created two projects: one for the macro code and one just to keep the configuration of Stash.

Structure of a Macro

These are the parts of a macro:

  • Plugin definition (atlassian-plugin.xml)
  • Macro definition (xhtml-macro inside of the plugin definition)
  • Macro code (run when the macro executes)
    • com.atlassian.confluence.macro.Macro
  • JavaScript (used when rendering the macro editor)
    • optional, used if you want the macro editor to have dynamic behavior…so not really optional.
  • Template (velocity template for generating the resulting macro on the page.)

I’m not going to go into creating a Hello World macro. The standard doc is just fine for getting you up and running.

Using JavaScript to get Dynamic Behavior in Macro Editor

This is where the real power to create dynamic behavior is. For my macro I wanted to get a list of available projects, let the user select a project, then get a list of available repos in that project, let the user select a repo, and finally store that information for later use when rendering the macro on the page. The basic features found in the xhtml-macro tag don’t come anywhere close to this. Here are a couple of things I learned. Here is the reference file.

  • AJS.MacroBrowser.setMacroJsOverride has a slew of undocumented methods that let you hook into the creation lifecycle, such as beforeParamsSet, beforeParamsRetrieved, fields, etc.
  • beforeParamsSet and beforeParamsRetrieved are important for saving parameters and making sure that your code does not overwrite previously selected values.
  • AJS.$ is just jQuery. When you are wondering how to dynamically change the macro browser, do not search for “how to edit confluence drop down”. Search for “how to edit a drop down with jQuery”.
  • Confluence will combine all the JavaScript and put it into one file. Find that file in the browser developer console, put a breakpoint in your code and explore the other methods available. There is intentionally no documentation for this stuff.

Most of the code speaks for itself. If you do the Hello World example and then take a look at my project, it will fill in some of the blank spots.

https://github.com/Scuilion/confluence-stash-macro

Oracle: No Data Found

ORA-01403: no data found

This is an error that happens in Oracle when you try to SELECT a value INTO a variable and the query returns no rows.

SELECT id INTO tempvar FROM person WHERE first_name = 'Steve';

Most places will tell you to work with this problem through exceptions:

BEGIN
    SELECT id INTO tempvar FROM person WHERE first_name = 'Steve';
EXCEPTION WHEN NO_DATA_FOUND
THEN
    -- what to do when no data is found
END;

This seems strange to me, as this is not an exceptional case. You know there is a possibility that the data you want won’t be there, so why not just check for it up front?

I’d like to propose a more explicit solution to the problem.

SELECT count(*) INTO tempvar FROM person WHERE first_name = 'Steve';
IF tempvar <> 0
THEN
    --we're all good
    SELECT id INTO tempvar FROM person WHERE first_name = 'Steve';
END IF;

Exceptions break the flow of a program. Do you really want to break out of what you’re doing or do you want to handle the missing row?

Edit:
Several days after posting, I came across a Stack Overflow question regarding how to drop a table that may or may not exist. The first (and accepted) answer recommends catching the “table not found” exception, which is what I recommend against. The second answer, however, makes a similar assertion about using a conditional.

Comment on that answer:

+1 This is better because do not relay on exception decoding to understand what to do. Code will be easier to mantain and understand – daitangio

ZSH: How Deep Am I

If you are on Unix and haven’t tried zsh, I recommend it. If you use zsh and haven’t tried oh-my-zsh, I recommend it. I won’t go into too much detail on oh-my-zsh and how it helps manage your zsh configuration, but it provides two features that make command line work more pleasant: 1) plugins for integration with things like git, gulp, ruby, etc., and 2) a number of different themes.

One of the plugins I use is the Gradle plugin. I noticed it hadn’t been updated for a while because it was missing some switches from newer versions of Gradle, so I updated it. That was a trivial change, and while I was there I noticed the plugin lacked auto-completion for the gradle init command. That is a more complex patch and required me to start learning zsh scripting.

Writing zsh and testing various functions is pretty straightforward: write a function in a file or directly on the command line. Updating configurations (which is what oh-my-zsh manages) is slightly harder to test. The best recommendation I found was to just open zsh inside zsh.


I would open a nested zsh, let oh-my-zsh do its work with my changes, and then test. Now I have a shell inside a shell: if I type exit, I’m back to my base shell; if I type exit again, whatever terminal I’m in shuts down. If I got distracted, forgot what level I was at in my nested zsh, and exited, I had to start my terminal all over again. This is slightly annoying, so I added a little script to my .zshrc file to help me track where I’m at.

export ZSH_LEVEL
let "ZSH_LEVEL++"
print -Pn "\e]2;${ZSH_LEVEL}\a"

function zshexit_shelllevel() {
let "ZSH_LEVEL--"
print -Pn "\e]2;${ZSH_LEVEL}\a"
}
zshexit_functions+=(zshexit_shelllevel)

Let me explain this real quick. Lines 3 and 7 change the window title (that’s what the 2 in the escape sequence selects). zshexit_functions is a zsh hook array whose functions get called when zsh exits. So, when the .zshrc gets sourced on startup it increments ZSH_LEVEL, and when zshexit_shelllevel gets called it decrements ZSH_LEVEL, changing the window title each time.

It looks like this:

[screenshot: a terminal with the shell level shown in the window title]

See that little “2” in the bottom right-hand corner.

I’m sure there are simpler/better ways to do this, but I got a kick out of learning some simple zsh builtins and display preferences, both of which are used heavily in oh-my-zsh.


 

Edit (March 19, 2016):

As expected, there is a built-in solution for this problem. Check out nath_schwarz’s solution on reddit.
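I haven’t reproduced that thread here, but the built-in in question is presumably $SHLVL, which zsh (and bash) already increments for every nested shell. Pushing it into the title is then a one-liner (using the portable octal forms of \e and \a):

```shell
# $SHLVL already counts shell nesting depth; show it in the window title
printf '\033]2;%s\007' "${SHLVL:-1}"
```

No bookkeeping or exit hooks needed, since each nested shell gets its own incremented copy of SHLVL.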

Standup Notes

A while back I found a nice snippet of code that allows you to turn your browser into a quick note taking tool. Just create a bookmark with the following code and you can quickly open a new tab and start typing.

data:text/html, <title>Text Editor</title><body contenteditable style="font-size:2rem;font-family:Helvetica;line-height:1.4;max-width:160rem;margin:0 auto;padding:3rem;">

It ends up looking like the following:
[screenshot: notes typed into the in-browser text editor]

On occasion, when our Scrum leader was on vacation, I would end up running the morning stand-up meetings. I wanted to take notes but didn’t want to bother writing out the full status of 10 coworkers, so I used my browser notes to jot down quick reminders of what everyone was doing (ticket numbers, requests for help, blocking issues, etc.). That way, on the rare occasion when the boss asked for an update or I needed a reminder to get something done for someone, the information would be there: unless I accidentally refreshed my browser, or restarted the browser, or rebooted the machine…

Client-Side Storage

I’ve been wanting to experiment with client-side storage techniques and decided to solve my note-taking problem with IndexedDB. My first attempt at getting sticky data in the browser used Web SQL, before I realized that it isn’t a universally adopted browser feature. I won’t go into explaining IndexedDB, as there are many good tutorials out there (this being the one I used). Here were the requirements for my little project:

  • Must be only one file and require no web server
  • Has to keep track of one day's worth of notes
  • Must be able to update only one entry
  • Allow me to never take my hands off the keyboard while updating

The DB structure is simple: there is one object store, called attendee, and a cursor, order, to maintain the position of each entry after the browser is refreshed. There is one key binding that works based on where you are in the list. If you are at the bottom and press Enter, it creates a new entry; if you are not on the last line of text, it jumps to the next entry and highlights the value/status. If you add Shift as a modifier, it goes in the reverse direction. Check out this gif example.

[gif: stepping through entries and refreshing the browser]

If you’re paying close attention, you can see the browser being refreshed and the data not going anywhere. Also, the order is maintained; before I added the cursor and the order column, entries would come back in some alphabetic order. You can check out the code on my gist. You’ll notice it’s not feature complete; it would be nice if there were a way to delete or reorder entries.

Value Types or: Let’s add another type system to Java

Java has two major type systems: primitive types (e.g. int, byte) and class/object types (e.g. java.lang.String). Primitive types provide concise storage of data with no attached methods, while class types provide the ability to extend the functionality of an object while sacrificing space. These type systems are on different ends of the space/performance spectrum, and Value Types aim to get the best of both worlds.

Value Types is a JDK Enhancement Proposal (JEP) that will change the JVM instruction set to allow “support [for] small immutable, identityless value types”. The “identityless” notion is the big difference here. In Java, identity is there to support mutability, synchronization, and a few other features. Identity comes with a cost to performance and increases the object’s footprint.

There are a number of features that could be gained from Value Types: tuples (especially nice when trying to return multiple values from a method), numerics (supporting more than the eight default Java primitives), native types (i.e. taking advantage of specific processor types), algebraic data types (see jADT for some good examples/rationale), and iterator/cursor simplification.

Below are some examples showing two of the possible gains from Value Types: a reduction in footprint and an increase in performance.

Performance – Reductions in Opcodes

Take a look at this simple example.

public class Example {
    public static void main(String[] args) {
        Example example = new Example();
        int intPrimitive = example.intMethod(1, 2);
        Integer intObject = example.integerMethod(1, 2);
    }
    public int intMethod(int a, int b) {
        return a + b;
    }
    public Integer integerMethod(Integer a, Integer b) {
        return a + b;
    }
}

The code adds the same two numbers together, first using the primitive int type and then using the Integer object. Let’s first take a look at the bytecode for the primitive method.

  public int intMethod(int, int);
    descriptor: (II)I
    flags: ACC_PUBLIC
    Code:
      stack=2, locals=3, args_size=3
         0: iload_1
         1: iload_2
         2: iadd
         3: ireturn

Pretty straightforward. The method loads its two arguments (iload_1, iload_2), adds them (iadd), and returns the result (ireturn).
And now, the Integer version.

  public java.lang.Integer integerMethod(java.lang.Integer, java.lang.Integer);
    descriptor: (Ljava/lang/Integer;Ljava/lang/Integer;)Ljava/lang/Integer;
    flags: ACC_PUBLIC
    Code:
      stack=2, locals=3, args_size=3
         0: aload_1
         1: invokevirtual #10   // Method java/lang/Integer.intValue:()I
         4: aload_2
         5: invokevirtual #10   // Method java/lang/Integer.intValue:()I
         8: iadd
         9: invokestatic  #5    // Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;
        12: areturn

It takes three more instructions and the bytecode array is over three times larger. What is killing us here is the boxing and unboxing of the Integer objects. (Note: in this code we’re not using any Object features.) This is pretty basic knowledge, but it’s neat to see it in the bytecode.

Footprint – Header Size

Let’s skip past the size of an int (4 bytes) and go straight to Integer. Using jol, a tool to help analyze object layout, we can see an object’s internals.

Running 64-bit HotSpot VM.
java.lang.Integer object internals:
 OFFSET  SIZE  TYPE DESCRIPTION                    VALUE
      0    12       (object header)                N/A
     12     4   int Integer.value                  N/A
Instance size: 16 bytes (estimated, the sample instance is not available)
Space losses: 0 bytes internal + 0 bytes external = 0 bytes total

Notice that we waste 12 bytes on the header. That means that an Integer is 4 times the size of an int.

These little excesses add up after a while. In fact, this is how some algorithms classes make sure that when you do the programming assignments you actually implement the algorithm and don’t use a library (or several libraries) to bypass learning how the algorithm works. Think about it: if you have to sort an array of 100 ints with bubble sort and you instead use Collections.sort(), your program is going to use much more memory than if you had used the primitive int, not to mention being slower.
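As a toy illustration of that point (bubbleSort and librarySort are my names, not from any particular class), both routes produce the same ordering, but the library route boxes every element into a 16-byte Integer first:

```java
import java.util.*;

class SortFootprint {
    // in-place bubble sort over primitives: 4 bytes per element, no boxing
    static int[] bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
        return a;
    }

    // the shortcut: every int becomes a 16-byte Integer before sorting
    static List<Integer> librarySort(int[] a) {
        List<Integer> boxed = new ArrayList<>();
        for (int v : a) {
            boxed.add(v);  // autoboxing happens here
        }
        Collections.sort(boxed);
        return boxed;
    }
}
```

Same result either way; the difference only shows up in footprint and time, which is exactly what an instructor grading on resource usage is counting on.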


Most of my understanding for this post comes from this blog and from the JEP description, which is long but interesting.

Introducing gradle-syntastic-plugin 0.3.6

I’m excited about this release of gradle-syntastic-plugin because it makes integrating Syntastic with your Gradle projects simple. The secret is getting it to work with the Gradle plugins mechanism. Now the only thing required to set up a project is to add the following to your plugins DSL.

id "com.scuilion.syntastic" version "0.3.6"

Here is an example of what a simple Java project would look like in Gradle.

plugins {
    id "org.gradle.java"
    id "com.scuilion.syntastic" version "0.3.6"
}

Before this release, you had to include jcenter in the repositories block and add the dependency to the buildscript classpath, but no more.

The library is still published on Bintray, so if you need to use the old style it is still supported.

Release Notes: https://github.com/Scuilion/gradle-syntastic-plugin/releases/tag/v0.3.6
Example Usage Project: https://github.com/Scuilion/documenter/blob/master/build.gradle

Handling Permissions in REST

Figuring out how to handle permissions when designing a REST API can get confusing, especially if you don’t have names to describe your permissions. I had gone through most of the permission use cases and was trying to convey the requirements to our contractors when I found a great doc on authorization for Apache Shiro. It gives names to the different levels of permissions, and giving good names to the concepts I was trying to convey solidified the ideas in my head (and on paper).

Shiro describes permissions at three levels:

• Resource Level – This is the broadest and easiest to build. A user can edit customer records or open doors. The resource is specified but not a specific instance of that resource.

• Instance Level – The permission specifies the instance of a resource. A user can edit the customer record for IBM or open the kitchen door.

• Attribute Level – The permission now specifies an attribute of an instance or resource. A user can edit the address on the IBM customer record.

This is a short write-up of how the permissions ended up looking in my design. I’ll demo it with a made-up User resource that looks like this:

{
    'firstName':'kevin',
    'age':'33',
    'hireDate':'1276304738',
    'nickname':'kevino',
    'salary':'10k'
}

There are three types of data in this object: general data, like firstName and nickname; sensitive data, like salary (I need a raise); and fixed data, like hireDate.

Resource Level

Resource Level permissions are about how to access groups of resources: the ability to view, create, or delete whole resources. The following endpoint responds with the expected list of data and a _links object. The _links object indicates that the caller can create a new user or view the detail of a user, but not delete a user. Also, notice that there is no indication that they can edit a user; that information comes with the Instance Level permissions and hinges on the ability to view a user. (It seems the caller of this endpoint is a manager, because they can see the users’ salaries.)

{
    '_links': {
        'self': { 'href': '/app/services/users' },
        'next': { 'href': 'app/services/users/page=2' },
        'actions.new': { 'href': 'app/services/users/{id}' },
        'actions.view': { 'href': 'app/services/users/{id}' }
    },
    'users': [{
        'firstName': 'kevin',
        'age': '33',
        'hireDate': '1276304738',
        'nickname': 'kevino',
        'salary': '10k'
    }, {
        'firstName': 'samuel',
        'age': '37',
        'hireDate': '998875957',
        'nickname': 'sam',
        'salary': '20k'
    }]
}
Instance Level

Instance Level permissions are seen when looking at a single resource. The example below shows that a user can edit all fields in the user object. But what if editability is limited to certain fields? (Notice that the server’s response has replaced the {id}: it gives you the exact user’s URL for editing.)

{
    '_links': {
        'self': { 'href': '/app/services/users/5' },
        'actions.edit': { 'href': 'app/services/users/5' }
    },
    'firstName': 'kevin',
    'age': '33',
    'hireDate': '1276304738',
    'nickname': 'kevino',
    'salary': '10k'
}
Attribute Level

This is where I feel permissions don’t fit nicely into REST principles. But because the permissions are isolated and only found at this level, it doesn’t bother me much.

There are two types of Attribute Level permissions to take into consideration: 1) the ability to view and 2) the ability to edit. Notice the permissions object shows that the caller can’t edit the hireDate. Also notice that, in order to prevent the caller from viewing the salary value, the server simply does not send that data back. In a lot of designs I’ve seen internally, the developer relies on the UI to block out invisible values. Even though this is “good enough” 99% of the time, it is better to set the standard that any value not viewable by a particular caller should not be sent, key as well as value. Then you won’t accidentally send someone’s salary to their co-worker.

{
    '_links': {
        'self': { 'href': '/app/services/users/6' },
        'actions.edit': { 'href': 'app/services/users/6' }
    },
    'permissions': {
        'hireDate': 'false'
    },
    'firstName': 'samuel',
    'age': '37',
    'hireDate': '998875957',
    'nickname': 'sam'
}

Note: I’ve trimmed the links in the href values, but I believe your service should return the full URL. An HTML page wouldn’t give you part of a URL; why should your API?

Update:
Here is a nice discussion on reddit about an alternate way of handling permissions using roles. This is a much simpler solution if you can limit the number of roles a system has to maintain.

Refactoring: Logical Operators Instead of Conditional Flow

I’ve recently been reading Working Effectively With Legacy Code by Feathers (an oldie, but a goodie). The book is good for enumerating refactoring techniques, but more importantly it is a reminder to be more observant when looking through old code, and that is exactly what happened to me this weekend. I was trying to familiarize myself with how our application used OSGi inside of a web container when I came across the following code. It looks as if it had been written in stages, something akin to: “Try it this way. Oh, that worked here but not there. Add another thing… oh, and there is another use case.” At any point, when the programmer realized they needed to add another case, they should have used the Sprout Method technique. After all, it doesn’t take much to create a method and put the code down a few lines. There are several other concerning things about the code. I’ll enumerate them below and then rewrite the method a couple of times.

public boolean Startup() {
    boolean started = true;
    // first try
    String osgiServletURL = URLUtils.getUrl() + BRIDGE_URL;
    started = callBridgeServlet(osgiServletURL);

    // recovery part 1  
    if (!started) {
        InetAddress thisIp = null;
        try {
            thisIp = InetAddress.getLocalHost();
        } catch (UnknownHostException e) {
            started = false;
        }
        if (thisIp != null && thisIp.getHostAddress() != null) {
           String thisIpAddress = thisIp.getHostAddress().toString();
           osgiServletURL = createOSGIServletURL(thisIpAddress);
           started = callBridgeServlet(osgiServletURL);
        } else
           started = false;
    }
    // recovery part 2 
    if (!started) {
        osgiServletURL = createOSGIServletURL(LOCALHOST_NAME);
        started = callBridgeServlet(osgiServletURL);
    }

    return started;
}

  • Boolean success variables: The started variable is hard to keep track of, especially with all those conditional branches. (Should this variable be initialized to true‽)
  • Nested conditional statements: Not only that, but there are missing brackets around the last else statement, making it harder to read.
  • The method is making three attempts to connect to the servlet, and in one case it’s checking for an exception but not in the others, making the code asymmetrical.
  • A variable that changes meaning: Notice how osgiServletURL holds values that mean different things as it goes through the method. At one point it’s a URL at another it’s an IP address.

Let’s do a little cleanup in our first pass: eliminate unnecessary comments (the code will be obvious when we’re done), pull out method-wide variables, and, if you have to use a boolean success variable, make sure you initialize it to false; otherwise you’ll constantly be setting the value and you’re likely to miss a case.

public boolean startup() {
    boolean started = false;
    String osgiServletURL = URLUtils.getUrl()+BRIDGE_URL;
    started = callBridgeServlet(osgiServletURL);

    if (!started) {
        InetAddress thisIp = null;
        try {
            thisIp = InetAddress.getLocalHost();
            if (thisIp != null && thisIp.getHostAddress() != null) {
                String thisIpAddress = thisIp.getHostAddress().toString();
                osgiServletURL = createOSGIServletURL(thisIpAddress);
                started = callBridgeServlet(osgiServletURL);
            }

        } catch (UnknownHostException e) {
        }

    }
    if (!started) {
        osgiServletURL = createOSGIServletURL(LOCALHOST_NAME);
        started = callBridgeServlet(osgiServletURL);
    }

    return started;
}

Looking a little better. Let’s create a sprout method to clean up that ugly-looking nested exception (canConnectWithUrl, lines 2-4). We’re also going to create two more sprout methods so that they have similar names (canConnectWithIp, lines 7-17, and canConnectWithLocalHostName, lines 21-22). Ultimately, just a readability thing.

public boolean startup() {
    boolean started = false;

    started = canConnectWithUrl();
    if (!started) {
        started = canConnectWithIp();
    }
    if (!started) {
        started = canConnectWithLocalHostName();
    }

    return started;
}

And, finally, to get rid of the if statements and the boolean success variable, let’s use logical operators:

public boolean startup() {
    return canConnectWithUrl() 
        || canConnectWithIp() 
        || canConnectWithLocalHostName();
}

Java will call each method in turn and return as soon as the first one returns true. So we have code that reads like: “Can it connect using the URL, or can it connect with an IP address, or can it connect with the localhost name?” If the code had been written this way yesterday, I would have been able to read it, understand it, and move on to the next step within a minute. (Not to mention that the refactored code will be easier to test.)
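A tiny, self-contained demonstration of that short-circuit behavior (the attempt helper and its canned results are made up for illustration; it stands in for callBridgeServlet):

```java
import java.util.*;

class ShortCircuitDemo {
    static final List<String> calls = new ArrayList<>();

    // records that a connection strategy was tried, then returns its canned result
    static boolean attempt(String name, boolean result) {
        calls.add(name);
        return result;
    }

    static boolean startup() {
        // || evaluates left to right and stops at the first true operand
        return attempt("url", false)
            || attempt("ip", true)
            || attempt("localhost", true);
    }
}
```

After calling startup(), only "url" and "ip" appear in calls; the "localhost" attempt is never made because the chain already succeeded.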

This is not a complex case study, more similar to the type of pointers that I would give during a code review. Hopefully it will stand as a reminder that taking a second look at code can go a long way in readability and maintainability.