OGSA-DAI blog ported

SourceForge are retiring their “hosted apps”, starting with WordPress. This means that projects, such as OGSA-DAI, that use “hosted apps” such as TRAC or WordPress now have to host these in their own SourceForge project areas, rather than having them centrally managed.

I’ve followed their instructions on Migrating WordPress from Hosted Apps to Project Web, and this is the result. I think everything’s here, but if you’re an existing user and can’t log in, please let me know. Also, be warned that many of the links within the blog to other blog articles are hard-coded to the old blog URL format of http://sourceforge.net/apps/trac/ogsa-dai rather than the new one of http://ogsa-dai.sourceforge.net/blog. Finally, there are no images, as there are problems in getting uploaded files from the SourceForge backups, which a few users have experienced (I’ve raised ticket 26284).

TRAC’ll be the next to be moved…oh what joy!

–mike

ODW: …(thought I’d) something more to say

This post is something of a companion piece to “The time is gone, the song is over” by our project manager, Rob Baxter; here I give some final thoughts from my point of view as lead developer.

Firstly, I must say that this was a very enjoyable project and I’m sad that it has come to an end — there is still a lot I’d like to do and I firmly believe the Workbench has the potential to become a useful tool to a lot of developers in data-oriented projects.

The usability reviews by our resident usability expert Mike Jackson formed the backbone of the project, providing us with both a focus and an objective way of measuring and evaluating progress. Whilst something similar could be said for the code reviews, the usability reviews provided a much more visible sense of progress that had an immediate impact. Despite this, I believe the code review was essential in revealing the extent of the technical issues that need to be tackled in the long term if the project is to become a truly maintainable and robust piece of software.

This was the first project that I’ve worked on where we were expected to write a regular blog. I’m quite a keen reader of software blogs, so I relished the chance to add a few words to the general blogosphere at the same time as keeping everyone up-to-date with progress on the project. As well as project updates, I took the opportunity to write some posts on generic software development issues (such as On Code Quality) and some specific advice for developers using particular technologies (such as this post on Graphiti). The more generic posts I submitted to news sites such as Reddit, which resulted in quite a few readers and comments. It pays to have a thick skin if you go down this road; most blog posts will attract more criticism than praise!

Whether or not you go the news-site route, I strongly recommend the blog approach to other teams, as it is an excellent way of attracting new visitors to your site, making progress visible and keeping new and existing users up-to-date with news. The major disadvantage is that a largish blog post will take most of a day to write, especially if you take the wise route of getting feedback and re-drafting before publishing. A nice alternative to a blog post is a short screencast, which, if you have the right software (we used Camtasia), is quick to produce and appealing to visitors. I still hope to find time to produce a final screencast showing the current Workbench.

Picture by Jeff Kubina

My post On Code Quality arguably set me a standard that I was unable to live up to. Whilst I was aware that I wouldn’t have time to fix a lot of the existing issues, even entirely new parts of the Workbench failed to live up to my guidelines in some respects, most notably in testing. Testing was a particularly thorny issue for the project as it is largely GUI-based. Normally I would aim to follow an MVC (or similar, e.g. MVP) approach and just test the model, relying on manual testing for the strictly GUI parts. This proved difficult with the Graphiti library used to build the new graphical editor, and I found the best approach was to use SWTBot to drive the testing. However, even with this tool, the process was difficult and frustrating. The maxim that unit testing should save you time was certainly not working here: it took much longer to figure out how to write tests for the code than it did to write the code itself, which forced me to the decision to forgo a lot of testing in return for continued progress on tracker items. Regardless of this, I am convinced that the software is more maintainable than it was at the start of the project (the move to Graphiti was a significant gain here).
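
To make the model-only idea concrete, here is a minimal sketch of the kind of plain JUnit test that approach allows; Workflow and ProcessingElementNode are hypothetical model classes used purely for illustration, not the Workbench’s actual ones.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// A plain JUnit test against the model layer only: no Eclipse workbench,
// no SWT widgets, so it runs quickly and is unaffected by GUI changes.
// Workflow and ProcessingElementNode are hypothetical model classes.
public class WorkflowModelTest {

    @Test
    public void addedNodeAppearsInWorkflow() {
        Workflow workflow = new Workflow();
        ProcessingElementNode query = new ProcessingElementNode("SQLQuery");

        workflow.add(query);

        assertEquals(1, workflow.getNodes().size());
        assertTrue(workflow.getNodes().contains(query));
    }
}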

Given more time to spend on the project, I would focus on creating more documentation for users, improving the usability of the various visualisers and trying to make the codebase considerably more maintainable.

Finally, the main goal of the project, the creation of the VM, offers a great way for users to easily investigate and play with the OGSA-DAI technology (even if it does weigh in at a hefty 2.2GB download!). Other projects with complicated or distributed architectures may well want to consider a similar approach.

Thanks to Rob, Mike and JISC for a great project!

Adrian Mouat

The time is gone, the song is over…

Well, the project is over. How’d we do?

Firstly, a precis: where did we start?  We had a visual editor for OGSA-DAI; we knew it had usability issues; we knew there were some internal stability problems.  Our goals were:

  1. improve the usability of the tool for researchers and developers looking to build data-intensive workflows using OGSA-DAI;
  2. improve the underlying stability, provided this didn’t interfere with improving usability; and,
  3. maintain the link between visual workflow and underlying DISPEL data-processing language.

In summary, we did what we set out to do :-). The Workbench is now more usable, more stable, and the power of DISPEL is still there under the hood for those who want to explore.

How do we know it’s more usable? Well, we can calculate a reasonable metric by counting the number of issues in Mike Jackson’s initial evaluation report and comparing with the final evaluation report. This gives:

                       Initial Evaluation   Final Evaluation
High Priority Points   36                   10
Low Priority Points    69                   13

which has got to be good!  Credit to Adrian Mouat for making great inroads here. You can find the detailed points recorded on the project tracker.

As it happened, stability did take more of a back seat to usability than we (well, I) had expected, but Adrian did make one hefty improvement: moving from Eclipse’s GMF framework to the newer Graphiti.  Not only is Graphiti easier to work with and more current, it helped do away with some of the evils of generated code!

The focus on usability over stability was a change to the original project idea, but it was the right one.  We let the usability evaluation reviews drive the development tasks, and I reckon we’ve finished up with a better product as a result.

The Workbench code is, of course, available here at SourceForge, and there’s now a fully-contained Workbench-in-a-box VMware virtual machine image for those wanting to try it out.  This is something we now plan to use to publicise both the project and the OGSA-DAI/DISPEL approach to data-intensive computing.  Its first public outing could well be Globus World 2012 at Argonne National Lab real soon.

So, in summary, it’s been a great project which has really enabled us to take a useful e-research tool to the next level.  My thanks to Adrian and Mike, and to JISC for funding us!

Using cURL with OGSA-DAI 4.2 REST

OGSA-DAI 4.2 was released last week. This was our first release with a RESTful presentation layer, built using Jersey, in contrast to all of our previous releases, which have featured web services presentation layers.

Following on from a play with cURL last year to interact via HTTP with online resources, mainly from an RDF perspective, I decided to have a go at using cURL with OGSA-DAI. Following the OGSA-DAI RESTful API, I managed to use cURL to create an OGSA-DAI data source, populate this with the results of a query over a relational database and get back my data, all using HTTP GET, POST and DELETE commands and requiring the installation of no OGSA-DAI-specific components client-side whatsoever :-) A transcript of my experiences is available on our wiki.

ODW: Activate Direct Edit on Double-click in Graphiti

I thought I’d write a quick blog post that might help anyone who is using Eclipse’s Graphiti framework and wants to have direct edit activate on double-click rather than the slightly awkward single-click method that is used by default. It turned out to be really quite easy, but there is a small quirk, so I thought it was worth documenting here.

The first thing you need to do is set up a double-click behaviour provider as per the tutorial Providing Double Click Behavior. The slightly tricky bit is figuring out how to call the direct edit feature on the double-clicked element. This can be accomplished with code similar to the following:

// Tell the feature provider which element should be edited in place.
IDirectEditingInfo directEditingInfo =
    getFeatureProvider().getDirectEditingInfo();
directEditingInfo.setMainPictogramElement(pe);
directEditingInfo.setPictogramElement(textShape);
directEditingInfo.setGraphicsAlgorithm(
    textShape.getGraphicsAlgorithm());
// Mark the direct-editing request as active...
directEditingInfo.setActive(true);
// ...and refresh the editor; nothing happens until refresh is called.
getDiagramEditor().refresh();

Here pe is the PictogramElement containing the text to edit and textShape is the AbstractText element itself, which you get via the ICustomContext object. The IDirectEditingInfo interface doesn’t have the greatest documentation, so it’s quite possible what I’ve done here isn’t entirely correct, but it does seem to work. The real quirk is the need to call refresh for anything to happen; hopefully this will change in the future. (I’m also not clear why I need to set both the GraphicsAlgorithm and the PictogramElement, but there could be a good reason for this.)
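
For context, here is a rough sketch of the plumbing around that snippet, assuming it lives in the execute method of a custom feature returned by your tool behaviour provider, as in the Graphiti double-click tutorial; MyToolBehaviorProvider and ActivateDirectEditFeature are made-up names for illustration, not the Workbench’s actual classes.

import org.eclipse.graphiti.dt.IDiagramTypeProvider;
import org.eclipse.graphiti.features.IFeatureProvider;
import org.eclipse.graphiti.features.context.ICustomContext;
import org.eclipse.graphiti.features.context.IDoubleClickContext;
import org.eclipse.graphiti.features.custom.AbstractCustomFeature;
import org.eclipse.graphiti.features.custom.ICustomFeature;
import org.eclipse.graphiti.tb.DefaultToolBehaviorProvider;

// Tool behaviour provider that routes double-clicks to a custom feature.
public class MyToolBehaviorProvider extends DefaultToolBehaviorProvider {

    public MyToolBehaviorProvider(IDiagramTypeProvider dtp) {
        super(dtp);
    }

    @Override
    public ICustomFeature getDoubleClickFeature(IDoubleClickContext context) {
        return new ActivateDirectEditFeature(getFeatureProvider());
    }
}

// Custom feature whose execute() runs the direct edit code shown above.
class ActivateDirectEditFeature extends AbstractCustomFeature {

    public ActivateDirectEditFeature(IFeatureProvider fp) {
        super(fp);
    }

    @Override
    public boolean canExecute(ICustomContext context) {
        // Only offer direct edit when an element was actually double-clicked.
        return context.getInnerPictogramElement() != null;
    }

    @Override
    public void execute(ICustomContext context) {
        // pe is context.getPictogramElements()[0]; textShape is the
        // AbstractText returned by context.getInnerPictogramElement().
        // ... the IDirectEditingInfo snippet above goes here ...
    }
}

The double-click context is passed straight to the custom feature as an ICustomContext, which is why the snippet above can pull pe and textShape out of it.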

– Adrian.

Usability evaluation review complete

I have now completed my second usability evaluation of the workbench and written up my findings here. The second version has improved in terms of usability in a number of ways:

  • Graph nodes (processing elements and literals) are visually less cluttered with internal lines and look more appealing (they are now consistent with how OGSA-DAI activities are typically depicted in presentations).
  • Creating processing element nodes is now far easier: the user can drag and drop from a list of processing elements available in the registry.
  • The user can now use the Run button to submit their DISPEL workflow to the gateway which saves a lot of time.
  • In theory (see below), there is no need for the user to edit or even look at the DISPEL script at all.
  • Apart from a couple of places, I didn’t need any supporting user documentation at all (though whether this would be the case for someone completely new to the workbench would be worth checking).

The major outstanding issue is that it is still easy for the DISPEL graph to get out of sync with the underlying DISPEL script and, even if the graph represents a valid workflow, for the underlying script to be an uncompilable mess.

As with my initial evaluation, I’ve grouped my recommendations according to the various activities a user can undertake when using the workbench, and each recommendation cites the appropriate heuristic evaluation guidelines. I’ve also classified each recommendation as either low or high priority.
—Mike

ODW: Ten Hints for Testing Eclipse Plug-ins

Testing Eclipse plug-ins isn’t the straightforward walk-in-the-park you might be hoping for. For me it’s been a marathon in a maze full of dead-ends. In the hope that I can spare you some of my pain, here are ten hints for anyone embarking on a similar journey:

  1. As you probably know, you can use “Run As” to run code as a “JUnit Plug-in Test”. This creates a new workbench instance for running the tests, which means you can use the Eclipse Platform API within your tests. You have a large degree of control over the launched workbench, including options to control exactly which plug-ins it contains or to run it headless. (This run method is frequently referred to as PDE JUnit testing if you want to search for further resources.)
  2. Further to the idea of controlling the test workbench, be aware of the concept of a Target Platform: this allows you to set up a workbench to compile and test against that is significantly different from your host workbench.
  3. Put any PDE JUnit test code in a completely separate plug-in. This sounds like a pain as it creates yet another project in your workspace, but it is necessary to avoid your released plug-in being dependent on JUnit and/or other test resources. (You might find that you now need to export extra packages from the plug-in being tested before they can be used in the tests. I don’t think there is a way around this.)
  4. If you can separate your plug-in code into separate model and view plug-ins (or perhaps a jar and a plug-in), seriously consider doing so. This means you can keep your model tests completely separate from your view tests so that changes to ephemeral things like dialogue boxes won’t break them.
  5. You might find that you can use mocking libraries to isolate your code and avoid depending on components which require user interaction. Or you may well find this is far more trouble than it’s worth. (I started down this path but gave up when a supertype asked for user-confirmation with no way to turn it off).
  6. PDE JUnit tests are naturally pretty slow, as they need to start and stop Eclipse. This is a major problem as it decreases the frequency with which you will be willing to run them and increases the time needed to write them. (I think you could even argue that they are no longer truly “unit” tests because of this.) You might be able to mitigate this by using a continuous test runner which will regularly run the tests in the background for you.

    Not the sort of plug-in test I'd recommend. Image courtesy of artlebedev.com.

  7. You might find SWTBot is a lifesaver. It essentially provides various methods for locating UI elements and exercising them. For example, you might ask it to find the dialog box which says “Confirm Delete” and click the button titled “Yes” (there is a minimal sketch of this after the list). There are also controls for waiting on various conditions and taking screenshots of failures. SWTGefBot also exists, with special features for getting at GEF elements. Be warned, however, that SWTBot has a typical lack of documentation and several outstanding bugs. (I just noticed a similar UI testing tool called WindowTester Pro which looks like it might be a better choice.)
  8. Be aware of whether or not your tests run in the UI thread. SWTBot tests never do, but PDE JUnit tests will by default. If you are not in the UI thread, you will have to use the syncExec or asyncExec methods (either from the Eclipse Display class or the SWTBot UIThreadRunnable class) to run any code that calls the PlatformUI in Eclipse. If you do run in the UI thread, be aware that your test will hang if the editor waits for a user response.
  9. Documentation on how to do plug-in testing is sparse. Some of the best resources are the tests for existing plug-ins – I’ve been looking at the tests for Graphiti, which use a range of techniques including mocking and SWTBot. If you’re using SWTBot, you will probably need to look at the source code to figure out how to use it in some cases.
  10. Have a look at Buckminster, a framework for automating the building, testing and deployment of Eclipse-based software. It looks like it supports PDE JUnit tests and SWTBot, and could potentially be used to set up a continuous-integration-style build/check-in process. I only recently came across it, so I haven’t had time to try it out.
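
To make hints 7 and 8 a bit more concrete, here is a minimal sketch of an SWTBot test; the “Confirm Delete” dialog and the delete action are just the hypothetical example from hint 7, not real Workbench UI.

import org.eclipse.swt.widgets.Display;
import org.eclipse.swtbot.eclipse.finder.SWTWorkbenchBot;
import org.eclipse.swtbot.swt.finder.junit.SWTBotJunit4ClassRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

// SWTBot tests run outside the UI thread, so anything that touches the
// workbench API directly has to go through syncExec/asyncExec (hint 8),
// while SWTBot itself locates and drives the widgets (hint 7).
@RunWith(SWTBotJunit4ClassRunner.class)
public class DeleteConfirmationTest {

    @Test
    public void confirmDeleteDialogCanBeAccepted() {
        SWTWorkbenchBot bot = new SWTWorkbenchBot();

        // Trigger the (hypothetical) delete action on the UI thread.
        Display.getDefault().syncExec(new Runnable() {
            public void run() {
                // e.g. call into the editor / PlatformUI to start a delete
            }
        });

        // SWTBot finds the confirmation dialog and clicks "Yes".
        bot.shell("Confirm Delete").activate();
        bot.button("Yes").click();
    }
}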

Hope this list is of some help!

–Adrian.

ODW: Progress Update (28 Oct)

Time for another quick update.

Since the last blog post there has been a reasonable amount of progress*, but mainly in the form of small improvements rather than ground-shaking new features. These improvements include the following changes to the Visual Editor:

  • Supporting creation of new Processing Elements via drag and drop from the Registry Client (this is quite cool!)
  • Automatic layout of DISPEL documents
  • Streamlining the process of creating new Processing Elements
  • Various bug-fixes

And some more generic changes:

  • Creation/addition of icons
  • Added a button to the toolbar to submit the workflow
  • Automatic adding of submit statement if none present (this is a big benefit for Visual Editor users as they no longer need to change to the Text Editor to add this statement)

Several of these changes were in response to some very brief feedback from our resident usability expert, Mike. The plan is to do a more thorough review of the current workbench next week, with a focus on the new Visual Editor.

My current focus is on creating a decent test-suite for the Visual Editor and creating a new VM for Mike to test. The test-suite has to be a major goal if the project is to have a maintainable future. I have to admit to putting this off due to uncertainty about how to go about it – it’s not a trivial component to test due to all the dependencies. However, I’m starting to get to the bottom of what’s required now and hopefully my next blog will be on this topic.

–Adrian.

* (Progress was slowed slightly as I took a few days off to run the Amsterdam half marathon – you can find some pictures that would put anyone off this idea at marathon-photos.com).

ODW: Progress Update (12 Oct)

Another quick post to keep everyone up-to-date with progress on the project.

Although I’m very happy with the move to Graphiti, it did take up some time we hadn’t budgeted for. The immediate benefit is that I can now see how a lot of remaining problems can be solved, whereas I would have had to experiment and swear a lot to solve them with GMF. We also managed to sidestep or solve some issues/bugs when doing the re-write.

I really want to set some time aside to look at refactoring the code I’ve just written and increasing the number of tests, so that it is a truly maintainable solution going forwards. Arguably, I should have been adding in tests from the start but it’s difficult when coming to grips with a new platform, especially when writing GUI heavy code. I suspect the tests will need to use mock objects extensively, which is a technique I’ve been meaning to explore for some time but have very limited experience with so far.

All of this, plus some sick leave, has meant we aren’t exactly where we should be according to the Gantt chart. One of the most pressing tasks is to produce another version for usability review. This second review is even more important given the move to Graphiti, which is likely to have created a different set of usability issues. Despite this, I’m pretty upbeat about what we can manage to do in the time left – I can see how to solve a lot of the remaining issues and should be able to start ploughing through them.

– Adrian.