Moved

Moved. See https://slott56.github.io. All new content goes to the new site. This is a legacy site, and it will likely be dropped five years after the last post in Jan 2023.

Tuesday, October 31, 2017

Some Reading

Higher-Order Functions. A really cool idea. JavaScript isn't my favorite language.

https://medium.freecodecamp.org/higher-order-functions-in-javascript-d9101f9cf528

This, on the other hand, is huge: trunk-based development.

https://codeburst.io/trunk-based-development-vs-git-flow-a0212a6cae64

I'm really tired of having a dev branch with periodic commits to master so we can deploy from master. It's so much nicer to tag a release and deploy that.

Here's the top 10 Python list for September 2017.

https://medium.mybridge.co/python-top-10-articles-for-the-past-month-v-sep-2017-e42c26816ab9?source=email-f2cdc4351994-1507375544705-digest.reader------0-37&sectionName=top&gi=9a6dd40e3790

Learning Python: Zero to Hero: https://medium.freecodecamp.org/learning-python-from-zero-to-hero-120ea540b567

Why we switched from Python to Go: https://codeburst.io/why-we-switched-from-python-to-go-60c8fd2cb9a9. We distribute a CLI for our API in Go because it's slightly simpler to provide executables in Go. However, our user community has Python already... We need to provide similar functionality in Python.

Technical Debt: https://medium.freecodecamp.org/what-is-technical-debt-and-why-do-most-startups-have-it-9a54458daabf

Software Engineering v. Programming. https://medium.com/@samerbuna/software-engineering-is-different-from-programming-b108c135af26

Annotations in Java? Ugh. The level of complexity seems to have gotten out of control. https://blog.softwaremill.com/the-case-against-annotations-4b2fb170ed67

Programming vs. the text of the code itself. https://medium.com/@karolismasiulis/programming-is-not-about-text-c205ba6aa3ba I don't buy the reductionist 1-dimensional view of text. Yes -- in a narrow, technical sense, it's one-dimensional. If you want to be really reductionist, it's a stream of bytes, which is really just a base-256 number. I don't think the reductionist argument is helpful. However, I am sure that the declarative/imperative distinction is worthy of a lot of thought, and this is a nice comparison. (In spite of the JavaScript.)

Tuesday, October 24, 2017

Programming by Search, Copy, and Paste Leads to Epic Fail

In a way, this is about an epic fail attempting copy-and-paste coding. But really, this is about thinking outside the box. The issue -- to me -- comes from failing to see the box. Here's the body of the email, edited slightly.
"...how determine when a file has completed downloading. It would be helpful if code snippets in a unix shell and Python. 
"I did Google but none seemed to address the fundamental race conditions. They all involve a variant of try, sleep and try again. This is problematic for my particular case because the file sizes very significantly."
I'll ignore the grammar problems and focus on the intent of the "I did Google..." part. Based on some personal knowledge, I doubt there was more than a single search string tried. And I doubt that more than a single page of the response was looked at. Those are not important concerns.

The important concern is the shocking vagueness of the problem statement. These words are almost entirely meaningless:
"a file has completed downloading"
Imagine the variety of possible file transfer protocols that could be involved, and how many of them can be properly scripted. Take all the time you want. It can help to make a list of all the protocols that make this a non-problem.

No protocol was named. Therefore, a protocol was assumed. And the presence of this kind of tacit assumption forms an implicit box restricting what they're doing. The restriction is so unyielding to them that they don't even need to mention it. It's as essential to them as air. They need it, but cannot see it, and refuse to acknowledge it.

At this point, all we can do is make random guesses.

("Why didn't you ask them for clarification?" you ask.  Good point. It's a personal failure in this case. The back-and-forth would take days. Eventually, they would send me useless explanations of deep ineptitude or a need to engage in corporate politics. Or both. I'll admit that I'm a jerk about requiring folks to take a first step and make a stab at code. Without code, I find it largely impossible to determine what they're really talking about. The above question is a prime example of a disconnection from reality that's too exasperating to deal with except superficially.)

Identifying the Box

Guess #1. This may be about FTP (or SFTP) file transfers. Further, it may involve FTP file uploads to a server, where the client doesn't disclose a size. Yes, the word "downloading" seems to preclude this guess, but almost all other choices aren't even possible.

If it really was a client-side download, this is trivially automated using any of the available FTP client programs, including wget, curl, sftp, etc. The Python ftplib module provides a fully scriptable FTP client. The documentation is packed with examples. It seems unlikely that the question is actually client-side.

It's also possible that a single search failed to reveal all these automatable FTP clients.

Guess #2. "determine when"? Who actually cares when the upload finishes? An upload matters to the next client doing a download, or -- perhaps -- to a process that's supposed to consume the uploaded file. Is that what this is about?

Is the real question "how to trigger processing of an uploaded file when using FTP?"

In this case, we're left with stacks of follow-up questions. Primarily: "Why are you using FTP?"

If they replace their silly FTP (or SFTP) server with a RESTful API, they won't have these problems. It takes a few days to write a secure file-upload Flask container. With a swagger spec. And unit tests. And Gherkin feature definitions, and a behave test suite to be sure it *really* works.  It doesn't need very many routes. On completion of upload, it can fork off subprocesses to process the uploaded files. This is not hard. Really. Flask + Celery will do this.
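
Here's a very rough sketch of such an upload route, far from the secure, fully-specified service described above. The route, directory, and worker command are all made up for illustration.

import subprocess
from pathlib import Path
from flask import Flask, jsonify, request

app = Flask(__name__)
UPLOAD_DIR = Path("/tmp/uploads")  # hypothetical landing area
UPLOAD_DIR.mkdir(parents=True, exist_ok=True)

@app.route("/upload", methods=["POST"])
def upload():
    # The whole file arrives in one POST; there's nothing to poll for.
    incoming = request.files["file"]
    target = UPLOAD_DIR / incoming.filename  # a real service would sanitize this name
    incoming.save(str(target))
    # Hand the completed file to a worker. A Celery task is the nicer choice;
    # a bare subprocess is shown here, invoking a hypothetical worker script.
    subprocess.Popen(["python", "process_upload.py", str(target)])
    return jsonify(status="accepted", path=str(target)), 202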

Understanding the problem seems to require stepping outside of some box. It appears this is a struggle because of a poorly-defined box: a box assumed without being stated.

Working With the Box

At this point, we can only pretend the problem is about triggering processing after an upload. Let's further pretend that FTP is a legal requirement. Or we can pretend that SFTP is imposed by an inept IT department who also loves living inside some poorly-defined box. We're stuck with FTP for inexplicable reasons.

What can we do to game an FTP server to trigger processing of files of unknown sizes?

  • Write our own FTP server. This isn't very hard. It is, however, far simpler to write a RESTful Flask service that handles the file upload as a POST request via curl or wget. Writing an FTP server's a pain in the ass because the FTP protocol is surprisingly complex. Even writing an FTP subset that serves very specific client needs can be painful.
  • Poll the upload directory. This implies a race condition. Polling (and the race condition) have no practical consequences. If you want "real-time", write a RESTful API and don't use FTP. Since you're insisting on FTP, a delay is going to be part of the solution.
I'm more than a little shocked that search was considered a viable design strategy for solving this problem. It doesn't seem like searching for solutions is required at all. I'm probably overstating this, but it seems sort of trivial and obvious that either a second file is required or a better file protocol is required. This seems to be simple "thinking," not "googling."

There are bunches of ways to approach this. Here are a few ways to use a second file and some kind of naming convention to show that two files are part of one transfer.
  • Send a file with the size of the target file *before* the target file. When the target file matches the stated size, initiate processing.
  • Send a file with the size and MD5 checksum of the target file, etc.
  • Send a file *after* the target file with the size and checksum. When this file shows up, simply confirm that the first file is all there.
Yes, polling is required. However, there's no race condition: there are two separate conditions which must both be met. The files are provided serially; the conditions are met serially. A sketch of the size-and-checksum companion file follows.
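
Here's a minimal sketch of that idea. It assumes the sender drops a ".done" file containing the size and MD5 hex digest on one line; the file names and polling interval are made up.

import hashlib
import time
from pathlib import Path

def wait_for_complete(data_path: Path, done_path: Path, interval: float = 5.0) -> None:
    """Poll until the .done file appears and the data file matches its size and checksum."""
    while True:
        if done_path.exists():
            size_text, md5_text = done_path.read_text().split()
            if data_path.exists() and data_path.stat().st_size == int(size_text):
                if hashlib.md5(data_path.read_bytes()).hexdigest() == md5_text:
                    return  # both conditions met, serially; safe to process
        time.sleep(interval)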

Here are two approaches that use a file format that properly handles completeness.
  • Gzip the file. The file receipt polling loop repeatedly tries to unzip it; a sketch follows this list. If the unzip fails, the file is incomplete.
    • Don't want to spend too much CPU time? Wait until the size has been stable for two polling intervals and then try to unzip.
  • Tar the file. Yes. A tar archive of a single file can be checked for integrity. When the archive can be checked and shown to be valid, the target element can be extracted and processed.
    • Don't want to spend CPU time validating? Again. Wait for a stable size for a few polling intervals.
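Here's a sketch of the gzip variant. It assumes the sender gzips the file before uploading; a truncated upload fails to decompress cleanly, so a successful end-to-end read means the file is complete.

import gzip
import time
from pathlib import Path

def wait_for_gzip(path: Path, interval: float = 5.0) -> None:
    """Poll until the gzipped upload decompresses end-to-end without error."""
    while True:
        if path.exists():
            try:
                with gzip.open(path, "rb") as source:
                    while source.read(64 * 1024):  # read to the end to force the CRC check
                        pass
                return  # decompression succeeded; the file is complete
            except (OSError, EOFError):
                pass  # truncated or still arriving; try again later
        time.sleep(interval)
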
And, of course, it's possible to invent an entirely home-brewed file-wrapping protocol. Here's an approach.
  • Wrap the content in MIME-style headers. These can provide a size or a terminator string to help identify the end of the transfer.
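Here's a sketch of that wrapper. It assumes a single Content-Length header line, a blank line, and then the payload; the header name is borrowed from HTTP purely for familiarity.

from pathlib import Path

def wrap(payload: bytes) -> bytes:
    """Sender side: prefix the payload with a MIME-style size header."""
    return b"Content-Length: %d\r\n\r\n" % len(payload) + payload

def is_complete(path: Path) -> bool:
    """Receiver side: the transfer is done once the stated number of bytes has arrived."""
    data = path.read_bytes()
    header, sep, body = data.partition(b"\r\n\r\n")
    if not sep:
        return False  # the header itself hasn't fully arrived yet
    expected = int(header.split(b":", 1)[1])
    return len(body) >= expected
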
The point here is that googling for code isn't part of solving this problem. Indeed, it can't solve this problem. Merely thinking about the nature of the problem ("triggering processing", "knowing the size") seemed necessary and sufficient to frame a solution.

What's Essential

Here's what didn't happen:
  • State the actual problem. 
  • Identify the boxes. Write them down. In words. There may be more than one.
  • Locate code to work with the boxes. Find the libraries or packages. Install them. Write a hello-world example to be sure that the code is understood.
Then -- and only then -- can we start to imagine solutions and ask questions about the boxes or the code that might manage the boxes.

It's impossible to state this strongly enough: We can't think outside the box if we refuse to acknowledge the box.

Tuesday, October 17, 2017

Why I like Functional Composition

After spending years developing a level of mastery over Object Oriented Design Patterns, I'm having a lot of fun understanding Functional Design Patterns.

The OO Design Patterns are helpful because they're concrete expressions of the S. O. L. I. D. design principles. Much of the "Gang of Four" book demonstrates the Interface Segregation, Dependency Inversion, and Liskov Substitution Principles nicely. They point the way for implementing the Open/Closed and the Single Responsibility Principles.

For Functional Programming, there are some equivalent ideas, with distinct implementation techniques. The basic I, D, L, and S principles apply, but have a different look in a functional programming context. The Open/Closed principle takes on a radically different look, because it turns into an exercise in Functional Composition.

I'm building an Arduino device that collects GPS data. (The context for this device is the subject of many posts coming in the future.)

GPS devices generally follow the NMEA 0183 protocol, and transmit their data as sentences with various kinds of formats. In particular, the GPRMC and GPVTG sentences contain speed over ground (SOG) data.
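
As background (separate from the analysis below), here's a rough sketch of pulling SOG out of a GPRMC sentence. It assumes the standard field order, with SOG in knots as the eighth comma-separated field, and it skips checksum validation.

from typing import Optional

def sog_from_gprmc(sentence: str) -> Optional[float]:
    """Extract speed over ground (knots) from a $GPRMC sentence, if present."""
    fields = sentence.strip().split(",")
    if not fields[0].endswith("GPRMC") or len(fields) < 9:
        return None
    return float(fields[7]) if fields[7] else None

# sog_from_gprmc("$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A")
# would yield 22.4 knots.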

I've been collecting data in my apartment. And it's odd-looking. I've also collected data on my boat, and it doesn't seem to look quite so odd. Here's the analysis I used to reach a more concrete conclusion.

import csv
import statistics
from pathlib import Path

def sog_study(source_path=Path("gps_data_gsa.csv")):
    with source_path.open() as source_file:
        rdr = csv.DictReader(source_file)
        # Pull the SOG column, drop empty values, and convert the rest to float.
        sog_seq = list(map(float, filter(None, (row['SOG'] for row in rdr))))
        print("max {}\tMean {}\tStdev {}".format(
            max(sog_seq), statistics.mean(sog_seq), statistics.stdev(sog_seq)))


This is a small example of functional composition to build a sequence of SOG reports for analysis.

This code opens a CSV file with data extracted from the Arduino. There was some reformatting and normalizing done in a separate process: this resulted in a file in a format suitable for the processing shown above.

The compositional part of this is the list(map(float, filter(None, generator))) processing.

The (row['SOG'] for row in rdr) generator can iterate over all values from the SOG column. The filter(None, generator) will drop all empty values from the results, assuring that sentences without SOG data are ignored.

Given an iterable that can produce SOG values, the map(float, iterable) will convert the input strings into useful numbers. The surrounding list() creates a concrete list object to support summary statistics computations.

I'm really delighted with this kind of short, focused functional programming.

"But wait," you say. "How is that anything like the SOLID OO design?"

Remember to drop the OO notions. This is functional composition, not object composition.

ISP: The built-in functions all have well-segregated interfaces. Each one does a small, isolated job.

LSP: The concept of an iterable supports the Liskov Substitution Principle: it's easy to insert additional or different processing as long as we define functions that accept iterables as an argument and yield their values or return an iterable result.

For example.

def sog_gen(csv_reader):
    for row in csv_reader:
        yield row['SOG']

We've expanded the generator expression, (row['SOG'] for row in rdr), into a function. We can now use sog_gen(rdr) instead of the generator expression. The interfaces are the same, and the two expressions enjoy Liskov Substitution.

To be really precise, annotation with type hints can make this explicit. Something like sog_gen(rdr: Iterable[Dict[str, str]]) -> Iterable[str] would clarify this.
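
Spelled out as a sketch, the annotated version might read like this.

from typing import Dict, Iterable

def sog_gen(csv_reader: Iterable[Dict[str, str]]) -> Iterable[str]:
    """Yield the raw SOG column value from each CSV row."""
    for row in csv_reader:
        yield row['SOG']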

DIP: If we want to break this down into separate assignment statements, we can see how a different function can easily be injected into the processing pipeline. We could define a higher-order function that accepted functions like sog_gen, float, statistics.mean, etc., and then created the composite expression.
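
Here's a sketch of that higher-order function; the name summarize() is made up for illustration.

import statistics
from typing import Callable, Iterable

def summarize(extract: Callable, transform: Callable, summary: Callable,
              source: Iterable) -> float:
    """Compose extraction, transformation, and summarization from injected functions."""
    return summary(map(transform, filter(None, extract(source))))

# Different pieces can be injected without touching summarize():
# summarize(sog_gen, float, statistics.mean, rdr)
# summarize(sog_gen, float, statistics.stdev, rdr)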

OCP: Each of the component functions is closed to modification but open to extension. We might want to do something like this: map_float = lambda source: map(float, source). The map_float() function extends map() to include a float operation. We might even want to write something like this: map_float = lambda xform, source: map(xform, map(float, source)). This would look more like map(), with a float operation provided automatically.

SRP: Each of the built-in functions does one thing. The overall composition builds a complex operation from simple pieces.

The composite operation has two features which are likely to change: the column name and the transformation function. Perhaps we might rename the column from 'SOG' to 'sog'; perhaps we might use Decimal instead of float(). There are a number of less-likely changes. There might be a more complex filter rule, or perhaps a more complex transformation before computing the statistical summary. These changes would lead to a different composition of the similar underlying pieces.
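
One way to accommodate those likely changes is to make the column name and the conversion function parameters of the composition. Here's a sketch, reusing the same file layout as above; the name column_study() is made up.

import csv
import statistics
from pathlib import Path
from typing import Callable

def column_study(source_path: Path, column: str = 'SOG',
                 convert: Callable[[str], float] = float) -> None:
    """Summarize one column of a CSV file; the column name and conversion are injectable."""
    with source_path.open() as source_file:
        rdr = csv.DictReader(source_file)
        values = list(map(convert, filter(None, (row[column] for row in rdr))))
        print("max {}\tMean {}\tStdev {}".format(
            max(values), statistics.mean(values), statistics.stdev(values)))

# column_study(Path("gps_data_gsa.csv"), column='sog', convert=Decimal)
# (with "from decimal import Decimal") would cover both of the likely changes.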

Tuesday, October 10, 2017

Python Exercises

https://www.ynonperek.com/2017/09/21/python-exercises/amp/

This seems very cool. The problems look challenging, and they include debugging and unit testing, so the exercises cover a lot of core skills.