
Thursday, February 27, 2014

Django and REST -- Tastypie vs. Django REST

Ouch. What a difficult question.

This isn't easy.

Comparing http://django-tastypie.readthedocs.org/en/latest/ with http://www.django-rest-framework.org is hard. They're both outstanding projects with a long history.

Trivial Follow-up Question 1: What are the requirements?

I happen to know a bit about the context, however, so I suspect that the requirements center on super-flexible data access and numerous serialization formats.

History

My initial reaction is "Django-REST" of course. Mostly because I started with this several years ago and spent some time tweaking and adjusting my local copy. Our requirements involved adapting Django (and Django-REST) to use ForgeRock OpenAM for authentication.

One feature that we didn't need was a sophisticated set of built-in transactions that covered the full REST spectrum of GET, PUT, POST and DELETE. 90% of our processing was GET with an occasional POST.

The other feature we didn't need was a trivial mapping from the Django object model. Our GET processing required view functions as mediation between our database models and the "published" model available through the RESTful API.

Since we needed so little, we hacked out the essential serialization feature set to support our GET operations.
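The general shape of that mediation was a plain view function: query the ORM, reshape the rows into the published model, and emit JSON. This is a sketch of the pattern only; the model and field names are hypothetical, not our actual code.

import json

from django.http import HttpResponse

from myapp.models import Account  # hypothetical app and model


def account_detail(request, account_id):
    # Mediate between the stored model and the "published" RESTful model.
    account = Account.objects.get(pk=account_id)
    published = {
        'id': account.pk,
        'name': account.name,
        'balance': str(account.balance),
    }
    return HttpResponse(json.dumps(published), content_type='application/json')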

Serialization

Considering the context of the initial question, I think that serialization is the deciding factor. Comparing the serialization features suggests the following summary.

Tastypie serialization is simpler. Support for XML, YAML, JSON, etc., is built in and needs very little configuration.
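To illustrate (the model and resource names here are hypothetical, not taken from either project's docs): a Tastypie ModelResource gets its formats by handing the Meta class a configured Serializer.

from tastypie.resources import ModelResource
from tastypie.serializers import Serializer

from myapp.models import Entry  # hypothetical model


class EntryResource(ModelResource):
    class Meta:
        queryset = Entry.objects.all()
        resource_name = 'entry'
        # One declaration covers all of the output formats.
        serializer = Serializer(formats=['json', 'xml', 'yaml'])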

Django-REST serialization and rendering are quite a bit more sophisticated and more flexible. The process is explicitly decomposed into serialization (breaking the model objects down into primitives) and rendering into an external representation such as XML, JSON, or YAML.
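A sketch of the two steps, with hypothetical serializer and field names: the serializer reduces an object to Python primitives, and a separate renderer turns those primitives into bytes in whatever representation is requested.

from rest_framework import serializers
from rest_framework.renderers import JSONRenderer


class EntrySerializer(serializers.Serializer):
    # Hypothetical fields describing the "published" model.
    title = serializers.CharField()
    published = serializers.DateTimeField()


def publish(entry):
    # Step 1: serialization -- object to a dict of primitives.
    data = EntrySerializer(entry).data
    # Step 2: rendering -- primitives to bytes; swap the renderer to
    # change the representation without touching the serializer.
    return JSONRenderer().render(data)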

This two-step breakdown in Django-REST seems to make an open data project work out nicely. The developers should find it easier to integrate and publish data from a variety of sources.

Thursday, February 20, 2014

Third Time's the Charm: the version 3.0 phenomenon

Somewhere, I have a vague recollection of reading advice from someone (Bill Gates?) that it takes three versions to get things right. The context may have been a justification of the wild success of Windows 3.0.

Or, I could be just making it up.

But one thing I have noticed is that there's a definite bias toward looking at software three times.

I worked (briefly) with an agile project management group that suggested that everything would be released three times: the "Good", "Better", and "Best" releases.

  • The good release passed the unit tests.
  • The better release included any non-functional (performance, auditability, maintainability, etc.) improvements required.
  • The best release was the best implementation possible.

Not everything required three releases. Simpler components can merge better and best. Some components simply start out in really, really good shape.

Teaching Moment

What I've also noticed is that the explanation of the component -- writing documentation, presenting to peers in a walkthrough -- leads to profound rethinking. 

Many things may appear to be better or best in the sense above. Until we have to explain them. Then they're no longer "best" but merely "better" or perhaps even "good." 

A few minutes spent hand-waving through a design often points to things that aren't quite so easy to explain. A walkthrough is very beneficial to the person doing the presentation.

But, not too early.

When I made military software, we had Preliminary Design Reviews that were done before coding began. The idea was to surround the difficult coding work with yet more process steps and yet more deliverable intermediate results. 

The intent was noble: if a walkthrough reveals so much, then do the walkthroughs early and often.

However. I'm beginning to think that early isn't ideal.

I think that the design walkthrough should be delayed until after minimally working code exists. Once there's code -- with automated unit tests -- then refactoring to meet non-functional quality factors (like performance) is easier and more likely to be successful.

Also, refactoring to make the software clear, simple, and elegant should probably wait until it works and has a complete suite of automated unit tests. 

Thursday, February 13, 2014

TCP/IP Mysteries and user support

It's not clear, actually, if this involves a TCP/IP "Mystery". What it may involve is a simple lack of ability to communicate. Or something.

I got this question:
"Request help w/ finding a reference or you can post a blog about how you can you have 2 oracle servers or for that matter any 2 servers listening in on different sockets on the same unix box."
And this background. Such as it is.
"They are going to ask, how can this work? My lame explanation is that on a unix box you can have multiple servers listening in on different ports. I tried Googling around but couldn’t find anything good."
It appears that the DBA provided a TNSNAMES.ORA. And some desktop tool user was not happy with the TNSNAMES.ORA that was provided.

The saga is long and sad.

It amounts to something like this.

DBA: Here's the TNSNAMES.ORA.
User: That didn't work.
DBA: Yes, it did.
User: No, it didn't. You're an idiot.
DBA: I know you are but what am I?

And it devolved from there into a request to help use Google to locate a tutorial on TCP/IP address and port numbers.

I'll repeat that: a request to help use Google.

Apparently, the desktop user had done something in database A and couldn't find the results in database B. And didn't understand what was going on.

And this led to the DBA asking me to help with Google to prove that the DBA's TNSNAMES.ORA worked.

How does that help the user?
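For the record, the answer to the original question is small enough to sketch here. Each server binds its own port number on the same host; a client chooses which server it talks to by the (host, port) pair in its connection string, which is exactly what a TNSNAMES.ORA entry records. A bare-bones illustration with the standard library (the port numbers are arbitrary):

import socket

# Two independent listeners on one host, distinguished only by port number.
# Any server -- an Oracle listener included -- is doing essentially this.
server_a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_a.bind(('0.0.0.0', 1521))   # e.g., first listener
server_a.listen(5)

server_b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_b.bind(('0.0.0.0', 1522))   # e.g., second listener
server_b.listen(5)

# Each accept() call would serve whichever client connected to that
# particular port; the two listeners never interfere with each other.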



Thursday, February 6, 2014

Hacker Monthly

Check out this month's Hacker Monthly.

One of my Stackoverflow answers was reswizzled into a short article on class design.

That was gratifying.