Moved

Moved. See https://slott56.github.io. All new content goes to the new site. This is a legacy site, and it will likely be dropped five years after the last post, in Jan 2023.

Tuesday, April 21, 2020

Why Python is not the programming language of the future -- a response

See https://towardsdatascience.com/why-python-is-not-the-programming-language-of-the-future-30ddc5339b66.

This is an interesting article with some important points. And. It has some points that I disagree with.

  • Speed. This is a narrow perspective. numpy and pandas are fast; dask is fast. A great many packages in the Python ecosystem are fast. This complaint seems to be unsupported by evidence. (See the timing sketch after this list.)
  • Dynamic Scoping Rules. This actually isn't the problem. The problem is something about not being able to change containing scopes. First, I'm not sure changing containing scopes is of any value at all. Second, the complaint ignores the global and nonlocal statements. (See the second sketch after this list.) The vague "leads to a lot of confusion" seems unsupported by any evidence.
  • Lambdas. The distinction between expressions and statements isn't really a distinction in Python in general, only in the bodies of lambdas. I'm not sure what the real problem is, since a lambda with statements seems like a syntactic nightmare better solved with an ordinary, named function. (See the third sketch after this list.)
  • Whitespace. Sigh. I've worked with many people who get the whitespace right but the {}'s wrong in C++. The code looks great but doesn't work. Python gets it right. The code looks great and works.
  • Mobile App Platform. See https://beeware.org
  • Runtime Errors. "coding error manifests itself at runtime" seems to be the problem. I'm not sure what this means, because lots of programming languages have run-time problems. Here's the quote: "This leads to poor performance, time consumption, and the need for a lot of tests. Like, a lot of tests." Performance? See above. Use numpy. Or CUDA. Time consumption? Not sure what this means. A lot of tests? Yes. Software requires tests. I'm not sure that a compiled language like Rust, Go, or Julia requires fewer tests. Indeed, I think the testing is essentially equivalent.
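As a quick check on the speed claim, here's a minimal timing sketch. The sizes are arbitrary and the exact numbers vary by machine, but numpy's advantage over a pure-Python loop is routinely an order of magnitude or more.

    # Summing ten million integers: pure Python built-in vs. numpy.
    import timeit
    import numpy as np

    data = list(range(10_000_000))
    arr = np.arange(10_000_000)

    print("pure Python:", timeit.timeit(lambda: sum(data), number=10))
    print("numpy:      ", timeit.timeit(lambda: arr.sum(), number=10))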
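On the scoping point, here's a minimal sketch of nonlocal doing exactly what the complaint says can't be done: rebinding a name in a containing scope.

    # ``nonlocal`` rebinds a name in the enclosing function's scope;
    # ``global`` does the same for module-level names.
    def counter():
        count = 0
        def tick():
            nonlocal count  # changes the containing scope
            count += 1
            return count
        return tick

    t = counter()
    print(t(), t(), t())  # 1 2 3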
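And on lambdas, a small sketch of the trade-off: a lambda covers a single expression; anything that needs statements reads better as an ordinary, named function.

    # A lambda is fine for a one-expression key...
    print(sorted(["bb", "a", "ccc"], key=lambda s: len(s)))

    # ...anything with statements belongs in a named function.
    def sort_key(s):
        if s.startswith("c"):
            return (0, s)   # statements, branches, docstrings: all available
        return (1, len(s))

    print(sorted(["bb", "a", "ccc"], key=sort_key))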
I'm interested in ways Python could be better. 

Tuesday, April 14, 2020

The COBOL-to-SomeBetterLang Translator

Here's a popular idea.
... a COBOL-to-X translator, where X is a more-modern programming language ...
This is a noble aspiration.

In principle -- down deep -- all programming can be reduced to an idealized Turing Machine.

This means that we *should* be able to locate all the state changes in a given spaghetti-bowl of COBOL. Given the abstract state transitions, we can emit a version of that machine in any language.

Emphasis on the *should*.

There are road-blocks.

The first two are rarities. But. When confronted with these, we'll have significant problems.

  • The ALTER statement means the code can be changed at run-time. There are constraints, but still... When the code is not static, the possible domain of state changes moves outside working storage and into the procedure division itself.
  • A data structure with a RENAMES clause. This adds a layer of alternative naming, making the data states quite a bit more complex.
The next one is a huge complication: the GOTO statement. This makes state transitions extremely difficult to analyze. It's possible to untangle any GOTO of arbitrary complexity into properly tested IF and WHILE statements. 

However. The tangle of GOTO's may actually have been meaningful. It may have carried some suggestion of a business owner's intent. A GOTO-elimination algorithm may turn tangled code into opaque code. (It's also possible that it clarifies age-old bad programming.)
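To make the untangling concrete, here's a minimal sketch of what the structured result tends to look like: each paragraph becomes a function that returns the label of the next paragraph, and one WHILE loop replaces every GOTO. The paragraph names are hypothetical stand-ins, not from any real program.

    import io

    # Each hypothetical COBOL paragraph becomes a function returning the next label.
    def main_loop(state):
        state["row"] = state["reader"].readline()
        return "CHECK-EOF"

    def check_eof(state):
        return "DONE" if not state["row"] else "PROCESS"

    def process(state):
        state["count"] += 1
        return "MAIN-LOOP"

    PARAGRAPHS = {"MAIN-LOOP": main_loop, "CHECK-EOF": check_eof, "PROCESS": process}

    def run(reader):
        state = {"reader": reader, "count": 0}
        label = "MAIN-LOOP"
        while label != "DONE":      # the single WHILE that replaces the GOTOs
            label = PARAGRAPHS[label](state)
        return state["count"]

    print(run(io.StringIO("a\nb\n")))  # 2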

The ordinary REDEFINES clause. This was heavily used as a storage optimization for the tiny, slow file systems we had back in the olden days. It's a union of distinct types. And. It's a "free" union. We do not know how to distinguish the various types that are being redefined. It's intimately tied to processing logic in the procedure division.
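Here's a minimal sketch of why the "free" union hurts, with entirely hypothetical layouts: the same twenty bytes are either a name or a pair of amounts, and the only discriminator is a convention buried in the procedure division.

    # Hypothetical: LAYOUT-2 REDEFINES LAYOUT-1. Nothing in the data itself
    # says which layout applies; this parser encodes a guessed convention.
    def parse(record: bytes) -> dict:
        kind = record[:2]
        if kind == b"01":    # LAYOUT-1: "01" + PIC X(18) customer name
            return {"name": record[2:20].decode("ascii").rstrip()}
        if kind == b"02":    # LAYOUT-2: "02" + two PIC 9(9) amounts
            return {"debit": int(record[2:11]), "credit": int(record[11:20])}
        raise ValueError(f"unknown record type {kind!r}")

    print(parse(b"01SMITH             "))    # the name layout
    print(parse(b"02000012345000067890"))    # the same 20 bytes, amounts layout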

Just to make it even more horrifying...

File layouts evolve over time. It's entirely possible for a *working*, *valid*, *in-production* file to have content that does not match any working program's DDE. The data has flags or indicators or something that lets the app glide past the bad data. But the data is bad. It used to be good. Then something changed, and now it's almost uninterpretable. But the apps work because there are enough paths through the logic to make the row "work" without it matching any file layout in any obvious way.

I'm not sure an automated translation from COBOL is of any value. 

I think it's far better to start with file layouts, review the code, and then write new code from scratch in a modern language. This manual rewrite leads directly to small programs that -- in a modern language -- are little more than class definitions. In some cases, each legacy COBOL app will likely become a Python module.
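A sketch of what "little more than class definitions" can mean, using a hypothetical record layout: the 01-level and its PIC clauses become a dataclass plus a parser for the fixed-width line.

    from dataclasses import dataclass

    @dataclass
    class AccountRecord:
        account_id: str    # was PIC X(8)
        name: str          # was PIC X(20)
        balance: int       # was PIC S9(9), here in cents

        @classmethod
        def from_line(cls, line: str) -> "AccountRecord":
            return cls(
                account_id=line[0:8],
                name=line[8:28].rstrip(),
                balance=int(line[28:37]),
            )

    print(AccountRecord.from_line("00001234DOE JOHN            000005000"))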

Given snapshots of legacy files, the Python can be tested to be sure it does the same things. The processing is not nuanced, or tricky, or even particularly opaque.

The biggest problem is that the knowledge captured in COBOL code tends to be disorganized. The real work is disentangling it. A language that supports ruthless refactoring will be helpful.

Tuesday, April 7, 2020

Why Isn't COBOL Dead? Or Why Didn't It Evolve?

Here's part of the question:
Why didn't COBOL evolve more successfully?
FORTRAN, OTOH, has survived precisely because it--and more importantly, related tools, esp compilers--has evolved to solve/overcome many (certainly not all!) of the sorts of pain-points you describe, while retaining the significant performance edge that (IMHO, ICBW) prevents challengers (e.g., Python) from dislodging it for tasks like (e.g.) running dynamical models (esp weather forecasting).
In short, why is FORTRAN still OK? Why is COBOL not still OK?

Actually, I'd venture to say the stories of these languages are essentially identical. They're both used because they have significant legacy implementations.

There's a distinction that I think might be relevant to the "revulsion factor."

Folks don't find Fortran quite so revolting because it's sequestered into libraries where we don't really have to look at it. It's often wrapped into SciPy. The GCC compiler system handles it and we're happy.

COBOL, however, isn't sequestered into libraries with tidy Python wrappers and Conda installers. COBOL is the engine of enterprise applications.

Also. COBOL is used by organizations that suffer from high amounts of technical inertia, which makes the language a kind of bellwether for the rest of the organization. The organization changes slowly (or not at all) and the language changes at an even more tectonic pace.

This is a consequence of very large organizations with regulatory advantages. Governments, for example, regulate themselves into permanence. Other highly-regulated industries like banks and insurance companies can move slowly and tolerate the stickiness of COBOL.

Also.

A FORTRAN library function that does something useful isn't utterly mysterious. There's often a crisp mathematical definition, and a way to test the implementation. There are no quirks.

For a COBOL program that does something required by law, there can still be absolutely opaque mysteries and combinations of features without acceptable unit test cases. This isn't for lack of trying. It's the nature of "application" vs. "subroutine."

The special cases and exceptions have to live somewhere. They live in the application.

For FORTRAN, the exceptions live in the Python wrapper that uses numpy, which uses the FORTRAN.

For COBOL, the exceptions are in the COBOL. Somewhere.

The COBOL Problem


It's a tweet, so I know there's no room for depth here.

As it is, it's absolutely correct. Allow me to add to it.

First. Replacing COBOL with something shiny and new is more-or-less impossible. Replacing COBOL is a two-step job.

1. Replace the COBOL with something that's nearly identical but written in a new language. Python. Java. Scala. Whatevs. Language doesn't matter. What matters is the hugeness of this leap.

2. Once the COBOL is gone and the mainframe powered off, then you can rebuild things yet again to create RESTful APIs and put many shiny things around it.

Second. Replacing COBOL is essential. Software is a form of knowledge capture. If the language (and tools) have become opaque, then the job of knowledge capture has failed. Languages drift. The audience is in a constant state of flux. New translations are required.

Let's talk about the "Nearly Identical But In A New Language."

Nearly Identical

COBOL code has two large issues in general
  • Data. The file layouts are very hard to work with. I know a lot about this. 
  • Processing. The code has crap implementations of common data structures. I know. I wrote some. There's more, we'll get to it.
We have -- for the most part -- two kinds of COBOL code in common use.
  • Batch processing. Once upon a time, we called it "Programming in the Large." The Z/OS Job Control Language (JCL) was a kind of shell script or AWS Step Function state transition map among applications. This isn't easy to deal with because the overall data flow is not a simple Directed Acyclic Graph (DAG). It has cycles and state changes.
  • Interactive (once called "on-line") processing. We called it OLTP: On-Line Transaction Processing. There are two common frameworks, CICS and IMS, and both are complicated.
Okay. Big Breath. What do we *DO*?

Here's the free consulting part.

You have to run the new and old side-by-side until you're sick of the errors and poor performance of the old machine.

You have to migrate incrementally, one app at a time.

It's hellishly expensive to positively determine what the COBOL really did. You can't easily do a "clean-room" conversion by writing intermediate specifications. You must read the COBOL and rewrite it into Python (or Java or Scala or whatever).

You cannot unit test your way to success here, because you never really knew what the COBOL does/did. All you can do is extract example records and use those to build Gherkin-language acceptance tests using a template like this:

    GIVEN a source document
    WHEN the app runs
    THEN the output document matches the example
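Here's a sketch of how that template plays out with behave (https://behave.readthedocs.io); the feature file, the snapshot file names, and the run_app helper are all hypothetical stand-ins.

    # features/update.feature (hypothetical):
    #   Feature: Master file update
    #     Scenario: Matches the legacy output
    #       Given the source document "snapshot_2020_04.dat"
    #       When the app runs
    #       Then the output document matches "expected_2020_04.dat"

    from pathlib import Path
    from behave import given, when, then
    from myapp import run_app  # hypothetical rewritten application

    @given('the source document "{source}"')
    def step_given_source(context, source):
        context.source = Path("snapshots") / source

    @when("the app runs")
    def step_run(context):
        context.output = run_app(context.source)

    @then('the output document matches "{expected}"')
    def step_compare(context, expected):
        assert context.output == (Path("snapshots") / expected).read_bytes()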

In effect, you're going to do TDD on the COBOL, replacing COBOL with Python essentially 1-for-1 until you have a test suite that passes.

Don't do this alphabetically, BTW. 

The processing graph for COBOL will include three essential design patterns for programs. "Edit" programs validate and possibly merge input files. "Update" programs will apply changes to master files or databases. "Report" programs will produce useful reports and feeds for reporting systems that involve yet more data derivation and merging.

  1. Find the updates. Convert them first. They will involve the most knowledge capture, A/K/A "Business Logic."  There will be a lot of special cases and exceptions. You will find latent bugs that have always been there.
  2. Convert the programs that produce files for the updates, working forward in the graph.
  3. The "reporting" is generally a proper DAG, and should be easier to deal with than the updates and edits. You never know, but the reporting apps are filled with redundancy. Tons of reporting programs are minor variations on each other, often built as copy-pasta from some original text and then patched haphazardly. Most of them can be replaced with a tool to emit CSV files as an interim step.
Each converted application requires two new steps injected into the COBOL batch jobs.
  • Before an update runs, the files are pushed to some place where they can be downloaded.
  • The app runs as it always had. For now.
  • After the update, the results are pushed, also.
These changes merely slow things down with file transfers. They provide fodder for parallel testing.
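A sketch of the injected steps as a wrapper around the legacy job, with a hypothetical staging directory standing in for "some place where they can be downloaded":

    import shutil
    import subprocess
    from pathlib import Path

    STAGING = Path("/staging/parallel-run")   # hypothetical download location

    def run_with_snapshots(inputs, outputs, legacy_command):
        for f in inputs:                      # push the files before the update runs
            shutil.copy(f, STAGING / "in")
        subprocess.run(legacy_command, check=True)   # the app runs as it always had
        for f in outputs:                     # after the update, push the results
            shutil.copy(f, STAGING / "out")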

Then. 

Two changes are made so the job now looks like this.
  • Before an update runs, the files are pushed to some place where they can be downloaded. (No change here.)
  • Kill time polling the file location, waiting for the file to be created externally. (The old app is still around. We could run it if we wanted to.) 
  • After the update, download the results from the external location.
This file-copy-and-parallel-run dance can, of course, be optimized if you take whole streams of edit-update processing and convert them as a whole.
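The "kill time polling" step is a small loop. Here's a sketch, assuming the new external app eventually drops its output in a known location.

    import time
    from pathlib import Path

    def wait_for(path: Path, poll_seconds: float = 30, timeout_seconds: float = 4 * 3600) -> Path:
        """Block until the externally-produced file appears, or give up."""
        deadline = time.monotonic() + timeout_seconds
        while not path.exists():
            if time.monotonic() > deadline:
                raise TimeoutError(f"{path} never appeared")
            time.sleep(poll_seconds)
        return path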

Yes, But, The COBOL Is Complicated

No. It's not.

It's a lot of code working around language limitations. There aren't many design patterns, and they're easy to find.
  1. Read, Validate, Write. The validation is quirky, but generally pretty easy to understand. In the long run, the whole thing is a JSONSchema document. But for now, there may be some data cleansing or transformation steps buried in here.
  2. Merged Reading. Execute the Transaction. Write. The transaction execution updates are super important. These are the state changes in object classes. They're often entangled among bad representations of data. (See the first sketch after this list.)
  3. Cached Data. A common performance tweak is to read reference data ("Lookups") into an array. This was often hellishly complex because... well... COBOL. It's a Python dict, for the love of God; there's nothing to it. Now. Then? Well. It was tricky. (Patterns 3 and 4 show up in the second sketch after this list.)
  4. Accumulators. Running totals and counts were essential for audit purposes. The updates could be hidden anywhere. Anywhere. Not part of the overall purpose, but necessary anyway.
  5. Parameter Processing. This can be quirky. Some applications had a standard dataset with parameters like the as-of-date for the processing. Some applications prompted an operator. Some had other quirky ways of handling the parameters.
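For the Merged Reading pattern, here's a minimal sketch of the classic sequential master/transaction match; the record layouts and the one-line "business logic" are hypothetical.

    # Both inputs must be sorted by key -- the same precondition the COBOL had.
    def update(masters, transactions):
        txns = iter(transactions)
        txn = next(txns, None)
        for master in masters:
            while txn is not None and txn["key"] < master["key"]:
                print("unmatched transaction:", txn)   # a classic latent-bug hotspot
                txn = next(txns, None)
            while txn is not None and txn["key"] == master["key"]:
                master["balance"] += txn["amount"]     # the state change lives here
                txn = next(txns, None)
            yield master

    new_master = list(update(
        [{"key": 1, "balance": 100}, {"key": 2, "balance": 50}],
        [{"key": 1, "amount": -25}, {"key": 2, "amount": 10}],
    ))
    print(new_master)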
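And for Cached Data plus Accumulators, a sketch in the language they always wanted to be; the file name and field names are hypothetical.

    import csv
    from collections import Counter

    def load_lookup(path="branch_codes.csv"):
        # The "hellishly complex" cached reference data: it's just a dict.
        with open(path, newline="") as f:
            return {row["code"]: row["name"] for row in csv.DictReader(f)}

    def summarize(records, lookup):
        # The audit accumulators: running counts and totals, anywhere we like.
        totals = Counter()
        for rec in records:
            totals["records-read"] += 1
            totals[lookup.get(rec["branch"], "UNKNOWN")] += rec["amount"]
        return totals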
The bulk of the code isn't very complex. It's quirky. But not complicated.

The absolute worst applications were summary reports with a hierarchy. We called these "control break" reports. I don't know why. Each level of the hierarchy had its own accumulators. The data had to be properly sorted. It was complicated. 

Do Not Convert these. Find any data cleansing or transformation and simply pour the data into a CSV file and let the users put it into a spreadsheet.
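The replacement is a few lines: find the cleansing steps, then pour the rows into a CSV for the spreadsheet's pivot tables to summarize. The field names here are hypothetical.

    import csv

    def dump(records, path="report_feed.csv"):
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["region", "office", "amount"])
            writer.writeheader()
            writer.writerows(records)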

Right now. We have to keep the lights on. COBOL apps have to be kept operational to manage unemployment benefits through the pandemic.

But once we're out of this. We need to get rid of the COBOL.

And we need to recognize that all code expires, and that we need to plan for expiration.