Wednesday, July 29, 2015

JavaFX Event Thread vs AWT Event Thread Brawl

So, I recently started working on a Java UI application, focusing on my backend first, then the UI components.  I am not very good at making pretty Swing UIs (very out of practice - but I was really active when Swing was in development - what was that, 1997?  :| ).  I have done a lot of dynamic web development over the last decade and am frankly shocked that Swing UIs are so far behind.  A dynamic Swing UI to match even a trivial web interface would be a LOT of work.  So, my first cut at the UI was targeted at behavior and testing, which was a substantial effort - I would have thought automated unit testing of user interfaces would be in a better state than it is.

Almost every substantive UI testing platform appears to be abandoned (UISpec4J, Abbot, others I can't remember anymore), so I decided to build my own support as I needed it (not a generalized UI unit testing framework - just what I needed).  The one good thing that came out of this approach was realizing that testing the UI through the UI - as a user would, or with a record/playback method - was a big mistake.  I was decomposing my user interface as any good developer would, but my testing was completely monolithic.  Instead, I started testing components at the finest grain, and my tests became much faster and easier to write, with less bulk.  I also wired components together to rely on messages between objects rather than on handles to each other.  Sound familiar?  That should have been my approach from the start.

At any rate, I was going along and decided it was time to make the user interface better, and I thought it might be time to look at JavaFX.  One of the major factors was Oracle choosing FX as the future direction for Java UIs, but another was the ease with which you can externalize the building of the user interface into FXML files.  That is huge.  I can use a builder and not be tied to a tool like Eclipse or IntelliJ - big win.

I made a refactor which permitted me to transition from Swing to JavaFX piecemeal, as it seemed desirable - an approach even encouraged by the addition of JFXPanel.  This seemed ideal, since rewriting the entire UI all at once sucks.  Well, this sucks worse.  The JavaFX event thread and the AWT event thread are entirely incompatible, so threading in the UI is a nightmare.  Consider as an example a Swing button action handler which needs to update the state of an FX node - you have to do that asynchronously, because you can't call PlatformImpl.runAndWait from the AWT event thread; it will hang much of the time.  But if your button action handler needs to wait until the FX node has finished updating - such as when the update triggers another event handler - you have to do everything asynchronously, with a lot of calls back and forth between the two event threads, and you are left with a real mess.
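
To make that concrete, here is a minimal sketch of the round trip.  The class and member names are mine, and it assumes the FX toolkit has already been initialized by creating a JFXPanel somewhere:

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javafx.application.Platform;
import javafx.scene.control.Label;
import javax.swing.JButton;
import javax.swing.SwingUtilities;

public class SwingToFxBridge {
    private final JButton button = new JButton("Update");
    private final Label fxLabel = new Label(); // lives in a scene inside a JFXPanel

    public SwingToFxBridge() {
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                // AWT event thread: FX nodes must not be touched here,
                // so hand the work off to the FX thread asynchronously.
                Platform.runLater(new Runnable() {
                    public void run() {
                        // FX application thread: safe to update FX nodes.
                        fxLabel.setText("Updated");
                        // Anything that must happen after the update has to
                        // hop back to the AWT event thread.
                        SwingUtilities.invokeLater(new Runnable() {
                            public void run() {
                                onFxUpdateComplete();
                            }
                        });
                    }
                });
            }
        });
    }

    private void onFxUpdateComplete() {
        // Back on the AWT event thread.
        button.setEnabled(true);
    }
}

Three levels of nesting just to update one label and get control back - and every hop is fire-and-forget, so there is no way for the button handler itself to wait on the result.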

To compensate, I have had to convert all messaging in the GUI to asynchronous, which makes unit testing a major hassle, because test code obviously needs to wait until things are done before making assertions.  The bottom line, to me, is: do not do this.  Transition to JavaFX at least at the Frame level - or Stage, in JavaFX parlance - and don't let messaging in the GUI depend on passing between the two systems.

Tuesday, June 16, 2015

Maven dependency ranges must die!

When I first read about Maven's ability to depend loosely on an artifact using version ranges, I thought, "Wow, that's pretty cool!  Why doesn't everyone do that?"  After all, who wouldn't want the latest version of newly released awesomeness?  By way of example, this is how a dependency in Maven usually looks, excerpted from Stefan Birkner's excellent JUnit system-rules project:

<dependency>
  <groupId>com.github.stefanbirkner</groupId>
  <artifactId>fishbowl</artifactId>
  <version>[1.1.1]</version>
  <scope>test</scope>
</dependency>

This means your project depends on version 1.1.1, exactly, of something known as fishbowl in the group of things known as com.github.stefanbirkner.  The brackets around the version number are unusual, and nearly redundant: strictly speaking, a bare <version>1.1.1</version> is a "soft" requirement that Maven's conflict mediation is allowed to override, while [1.1.1] demands that version and that version alone.  Going further into the pom, we find this:

<dependency>
  <groupId>junit</groupId>
  <artifactId>junit-dep</artifactId>
  <version>[4.9,)</version>
</dependency>

This construct is altogether different.  It specifies a dependency on the artifact known as junit-dep in the group known as junit, but allows for any version greater than or equal to 4.9.  The bracket on the left is an inclusive bound and the parenthesis on the right is an exclusive bound; the trailing comma with nothing after it allows for any later version.  By way of example, (4.9,5.0) would mean any version after, but not including, 4.9 and before, but not including, 5.0, while (4.9,4.9) would mean any version both greater than and less than 4.9.  That wouldn't work at all.
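
In pom terms, the common forms look like this (the comments are mine):

<version>1.1.1</version>      <!-- soft requirement: prefer 1.1.1 -->
<version>[1.1.1]</version>    <!-- exactly 1.1.1, and nothing else -->
<version>[4.9,)</version>     <!-- 4.9 or any later version -->
<version>(4.9,5.0)</version>  <!-- after 4.9 and before 5.0, excluding both -->
<version>[4.9,5.0)</version>  <!-- 4.9 up to, but not including, 5.0 -->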

This seems pretty powerful, but . . .

I have recently come around 180° in my thinking about this.  I now hate dependency ranges with a passion - for two very good reasons.

Awesome Reason I

This one is purely ideological, which is not to say that it is without merit; it's just a caveat that it may be debatable.  If you depend, for instance, on apache commons-io 2.3, what reason do you have to expect your code to work with 2.4?  Or even 2.3.1?  What assurance do you have from that developer community that 2.3.1 will not introduce a bug which impacts your library?  Certainly by version 2.4 the API might change and break compatibility with your code.  This is unpredictable and should not be part of a stable, released build.

Awesome Reason II++

This one is the biggie, and also exceedingly pragmatic.  The aforementioned system-rules library, otherwise awesome, includes a dependency on junit 4.9 and onwards.  In practice, depending on system-rules exposes my builds to periodic failures that are very difficult to track down or diagnose.  Just today I got this error again:

[ERROR] Failed to execute goal on project ssc-cli: Could not resolve dependencies for project org.bitbucket.bradleysmithllc.star-schema-commander:ssc-cli:jar:ssc.1.C: Failed to collect dependencies at com.github.stefanbirkner:system-rules:jar:1.9.0 -> junit:junit-dep:jar:[4.9,): No versions available for junit:junit-dep:jar:[4.9,) within specified range -> [Help 1]

What does this mean?  Why is it happening?  Well, after being plagued by this for a long time, I finally tracked it down.  Some transitive dependency of my project keeps retrieving crapped-up versions of junit - in this case 4.11-beta-1, which satisfies "greater than or equal to 4.9" but is not an actual release; it is something that made it to central and then somehow into my repository.  My only (reasonable) recourse is to go to my local repository, delete the junit group tree (~/.m2/repository/junit), and rebuild.  Guess what happens next?

[ERROR] Failed to execute goal on project ssc-cli: Could not resolve dependencies for project org.bitbucket.bradleysmithllc.star-schema-commander:ssc-cli:jar:ssc.1.C: Failed to collect dependencies at com.github.stefanbirkner:system-rules:jar:1.9.0 -> commons-io:commons-io:jar:[2.0,): No versions available for commons-io:commons-io:jar:[2.0,) within specified range -> [Help 1]

Yep, you guessed it - another broken dependency range - this time apache commons-io.  Repeating the same process, I delete my cached commons-io artifacts (~/.m2/repository/commons-io) and now my project builds.

I do not know exactly why this keeps happening, and I am sure there are things that could be done to fix it - like tracking down the bad dependencies and fixing everyone else's poms - but that isn't an option in most cases, nor is it something I want to do.  My project has an explicit dependency on junit 4.11, which satisfies >= 4.9, so I don't know why Maven would even consider another version.  The bottom line is that there is no way to predict when a library will break compatibility with yours, and you should not try to anticipate when that will happen.  Builds must be stable and reproducible, and dependency ranges violate both of those goals.
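
For what it's worth, the only real defense I know of from the consuming side is to cut the ranged artifacts out of the transitive graph with exclusions and pin the versions yourself.  A sketch of what that looks like (the test scope and the commons-io version here are my assumptions):

<dependency>
  <groupId>com.github.stefanbirkner</groupId>
  <artifactId>system-rules</artifactId>
  <version>1.9.0</version>
  <scope>test</scope>
  <exclusions>
    <exclusion>
      <groupId>junit</groupId>
      <artifactId>junit-dep</artifactId>
    </exclusion>
    <exclusion>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>2.4</version>
  <scope>test</scope>
</dependency>

With junit-dep excluded, my existing explicit junit 4.11 dependency satisfies system-rules, and Maven never has to resolve either range.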

Friday, June 5, 2015

Snob overflow . . .

Okay, another one of my many pet peeves. When you ask a question on Stack Overflow, the community often incorrectly assumes that you are too stupid to be asking a real question, and instead talks down to you and won't give a straight answer, even when there is one.

Here is a case-in-point: how-to-release-all-permits-for-a-java-semaphore

The question was immediately met with contempt: "What do you need this piece of code for?" That was not asked for clarification, but so that the discussion could be taken in a completely different direction. The Stack Overflow moderators are such Nazis about intervening - "This is not a question!", "This is a survey!", "These are opinions!", "This is a duplicate!" - that I am surprised they don't get upset in this situation, where a clear question is posed and the accepted answer goes off in a completely different direction. Rather than answering the question "How do I release all permits for a Java Semaphore?", they answered the question "How do I better design my code to keep my thread pool in sync?"

The simple answer is:

semaphore.drainPermits();            // discard any permits still available
semaphore.release(totalNumPermits);  // restore the semaphore to its full count

Again, whether that discussion is valuable or not, it does not answer the very straightforward question that was asked.
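
For completeness, here is that answer in a minimal runnable form (the permit count and the simulated acquire are mine):

import java.util.concurrent.Semaphore;

public class SemaphoreReset {
    private static final int TOTAL_PERMITS = 10;

    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(TOTAL_PERMITS);
        semaphore.acquireUninterruptibly(4); // simulate permits held elsewhere

        // The actual answer: empty the semaphore, then restore the full count.
        semaphore.drainPermits();
        semaphore.release(TOTAL_PERMITS);

        System.out.println(semaphore.availablePermits()); // prints 10
    }
}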

Friday, May 8, 2015

Sonic, What's up with your foam cups??

I have been noticing for the past few months that my Route 44 styrofoam cups from Sonic in OKC have the annoying 'feature' of leaking right through the pores in the foam. It happens over and over - and is it ever annoying. Here is a sample from the cup I am using right now; this cup is probably two or three days old, picked up at the Bricktown Sonic:
Those little drops look like normal condensation, but they are actually water escaping through the wall of the cup. I have to keep a paper towel under my drink at all times or else my desk will be all wet. In years past I could keep a cup for a long time and this would never happen. Very strange, and not a little annoying.

Monday, May 4, 2015

Java Synchronization. What's the big deal?

Throughout my career as a Java developer, I have read a non-trivial amount of code devoted to minimizing the impact of synchronized blocks.  The topic of thread safety is extremely complex and must be approached carefully.  In some cases, simply declaring a lack of thread safety can simplify a project immensely - e.g., Java Swing - but there are times when it can't be avoided without introducing a lot of complexity on the other side of the API.  That's not what I am referring to here, however.  Consider double-checked locking.  This technique is often used simply to reduce the amount of time it takes to call an accessor.  To illustrate, I wrote the following chunk of code:

import java.math.BigDecimal;

public class Main {
    public static final long TEST_COUNT = 1000000000L;

    public static void main(String[] args) {
        // Accumulate into r so the calls can't be optimized away.
        long r = 0;

        // Time one billion calls through the synchronized accessor.
        long startSync = System.currentTimeMillis();
        for (long i = 0; i < TEST_COUNT; i++)
        {
            r += getSync();
        }
        long stopSync = System.currentTimeMillis();

        // Time one billion calls through the unsynchronized accessor.
        long startUnsync = System.currentTimeMillis();
        for (long i = 0; i < TEST_COUNT; i++)
        {
            r += getUnSynch();
        }
        long stopUnsync = System.currentTimeMillis();

        // Report the total in ms. and the average microseconds per call.
        BigDecimal syncTotal = new BigDecimal(stopSync - startSync);
        System.out.println("Sync took (" + syncTotal + ") ms., or (" + (syncTotal.divide(new BigDecimal(TEST_COUNT)).multiply(new BigDecimal(1000))) + ") microseconds average.");
        BigDecimal unsyncTotal = new BigDecimal(stopUnsync - startUnsync);
        System.out.println("Unsync took (" + unsyncTotal + ") ms., or (" + (unsyncTotal.divide(new BigDecimal(TEST_COUNT)).multiply(new BigDecimal(1000))) + ") microseconds average.");
    }

    private static synchronized long getSync()
    {
        return System.currentTimeMillis() % 100L;
    }

    private static long getUnSynch()
    {
        return System.currentTimeMillis() % 100L;
    }
}

When I ran it, this was my output:
Sync took (98345) ms., or (0.098345000) microseconds average.
Unsync took (43530) ms., or (0.04353000) microseconds average.


The difference here is 50 or so nanoseconds per access. Unless you are writing a high-performance application like a 3D game or a database, I just don't see how this can even be worth discussing. Almost every application I write would call these accessors at creation time, which means 40 or 50 times during the entire life of the application; and even if the calls are tied to user requests, as in a service or web application, a few dozen nanoseconds per call will be dwarfed by the time the request itself takes. If there is a database access, just opening a connection that takes 1 millisecond costs roughly 20,000 times the synchronization overhead.

As engineers we sometimes like to focus on the really fun stuff, like generated bytecode or network packets or even CPU states, but synchronization overhead doesn't even seem like it's worth the time to type out.
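
For reference, the double-checked locking idiom I mentioned above looks like this - a minimal sketch, with Config as a stand-in class; note that the field must be volatile, or the idiom is broken under the Java memory model:

public class Config {
    private static volatile Config instance;

    public static Config getInstance() {
        Config result = instance; // first check, no lock taken
        if (result == null) {
            synchronized (Config.class) {
                result = instance; // second check, under the lock
                if (result == null) {
                    instance = result = new Config();
                }
            }
        }
        return result;
    }
}

All of that ceremony buys back roughly the 50 nanoseconds measured above on every call after the first.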

PS: Have I said how much I hate the blogger editor???

Friday, April 24, 2015

Star(schema) Commander, Iteration 4

So, thus far I have added groupings, aggregates and degenerate dimensions to the core, and I have taken a dependency on JxBrowser, which added 200MB to the project. :(

The UI for the data explorer is challenging.  I don't want to spend the rest of my life on it, but I also don't want to produce an ugly piece of crap reminiscent of old Linux Motif/Tcl/Tk apps.  The Java UI toolkit is much more limiting than I remembered - most modern gestures aren't accounted for, even in the newest stuff.  Maybe JavaFX?

Sunday, March 29, 2015

Star(schema) Commander, Iteration 3

So much for micro releases.  :|

So far I have experienced a few interesting things:

  • Micro releases are not that satisfying when doing initial development - especially across three interconnected projects - core, cli and gui libraries.
  • Unit testing user interfaces really sucks.  I tried to find something, but no solution seems to stay around for very long.  In the end I created my own UI framework based on standalone components and messaging, which made testing very easy and meant the runtime app just had to wire the components together properly.  It also achieved a nice encapsulation of the different UI parts.
  • It is WAY too easy to get stuck testing every detail in a UI, and it can eat up a LOT of time because everything you do is a new one-off.
  • The internet is not very helpful on the subject of user interface testing.  If I have to read another lecture about separation of responsibility and how you should test the domain layer, not the UI objects, blah blah barf, I will delete the internet.  Guess what, people?  User interfaces have requirements too, and those must be tested with the same level of reliability as every other part.
  • When testing a user interface, there is a lot you have to let go.  Unit testing Java code can be very granular and specific, but if you try that in a user interface it will be a train wreck.  E.G., if a table has headers that read "Name" and "Schema", testing those adds semi-useless tests and makes any change to the UI break way too many of them.  Besides which, I would like to be able to refactor my UI (changing lists to tables, etc.) without extra tests breaking.
  • I did manage to temper my desire to create the perfect user interface library.  I simply created what I needed and refactored as I realized that I needed more functionality.  There is a word for that but it escapes me at the moment (and most other times) ;)  . . .
  • I am toying with the idea of rewriting my UI using JxBrowser - a Java library for embedding a Chromium container.  My Java desktop apps look so crappy and take so much work to wire together that they are resistant to change, and HTML makes a much nicer container for UI code than Java does.  I am concerned about the extremely commercial nature of the project - $1600 per license, minimum; I don't know what they think they are providing that is worth that price - even though I do have a free open source license at my disposal.