Tuesday, December 5, 2017

Functional Requirements and Use Cases -- even for "simple" things

In the mailbag I found this nonsense, doomed to inevitable failure:
"As I get more serious about this data science stuff, it has become obvious that a windows machine is not the way to go. ...
Q1: What other things should I think about and consider while shopping for a new computer?
Q2: Are there issues w/ running VmWare and Windows 7 w/in VmWare on Ubuntu?
"
I've omitted many, many words (400 or so.)

Here are all of the functional requirements I could discern:
  • I would like to have 1 machine. I don't want a desktop and a laptop
  • Install VmWare 
  • Install Windows 7 using VmWare
That was the complete list of functional requirements. The other 400 words involved specifications. Nothing that approaches a use case beyond these three: one machine, VMWare, and Windows under VMWare. The form factor of laptop, which seems to go without saying, might be a user story, but that's pushing it.

The "a windows machine is not the way to go" and "Install Windows 7" indicate a fairly serious problem. It is not the way to go and it's required. Both. This is doomed to inevitable failure. 

This is not the way to make a decision.

Q1. What other things should I think about? 

Just about every other thing. Start with use cases and functional requirements. Skip over specifications. (In general, never start with specifications because that's where you end: a list of useless numbers that don't bracket what you actually want to do.)

Use Cases Matter. Specifications Don't Matter.

Write down all the MBs and TBs you want. Without a use case, they're irrelevant, noisy details. Throw the numbers away until you have a list of verbs. Things you will DO. 

With so few actual functional requirements, almost *any* computer (possibly including a Raspberry Pi 3) would pass the suite of acceptance test cases.

✅ One Machine.
✅ VMWare.
✅ Windows.

After a lot more back-and-forth, I discerned one (or maybe two) additional functional requirement(s).
  • I have leo w/ java to gen html.
I know what Leo is in this context. I'm guessing the "java to gen html" is JRst. The lack of clarity is, of course, part of the problem here.

This requirement surfaced in the context of explaining to me why Windows was so important. Really. Windows was required to run two open-source apps. And. "a windows machine is not the way to go." Doomed. To. Inevitable. Failure.

Here's the only relevant functional requirement: run Leo and Java. And even then, there's a huge hole in this. Leo is Python-based. Docutils RST2HTML is Python-based. Why not simply use Leo and Python? What does Java have to do with anything?

Buy this: a Pi-top: https://www.sparkfun.com/products/13896

Q2. Are there issues w/ running...? 

Yes. Always. For everything you can possibly enumerate there are "issues". 

There. Are. Always. Issues.

Use Cases Matter.

Since you don't have any functional requirements or use cases, it's impossible to filter the issues and see if any of the known issues impact what you think you're going to do.

From what I was told, a Pi-top covers everything that's required. It's hard to be sure, of course, when the functional requirements are so vague. But there's no evidence that the Pi-top can't work to fill all of the stated functional requirements.

What To Do Next

It seems obvious, but the next step is to create a test plan. Actually, that was the first step. Since it wasn't done first, now it's the next step.

Write down the things you want to do. Make a list. Ideally a long list of things you will DO. Active voice. Verbs. Actions. Tasks. Activities. It's hard to emphasize this enough.

Then, when considering a computer, see if it can actually do those things. Test it against the requirements to see if it does what it's supposed to do. Among all the machines that pass the tests, you can then sort by price. (Or availability, or reputation, or cool stickers, whatever non-functional requirements seem relevant.)

The questions of TB and MB and processor clock speed mean nothing. Nothing. Find the cheapest (smallest) machine that does what you want. Don't find the machine with x MB and y TB of whatever.

Then there's this: "As I get more serious about this data science stuff," which seems like little more than context. But it's really important. Indeed, it's essential.

If you're going to do machine learning, you don't really want to buy the necessary computer. You want to rent it for the hour or so each day you actually need it. It will be idle 23/24 hours each day (96% of the time.) Why buy that much horsepower you're never going to use?

If you're going to log in to a server purchased from a cloud computing vendor (Amazon AWS, Microsoft Azure, etc.), then you can probably get by with a tablet that runs SSH and a browser. A tablet with a cool keyboard and a little display rack can be very nice. https://panic.com/prompt/ and https://www.termius.com seem to be all that's required.

Without Use Cases, however, it's impossible to select a computer. Don't spend money without test cases.

Tuesday, November 14, 2017

CI/CD DevOps and Python

See https://www.slideshare.net/ITRevolution/does-sfo-2016-topo-pal-devops-at-capital-one for the 16 gates that separate a good idea from secure, productive use of software. While a lot of DevOps folks like the idea, when it comes to implementing it for Python apps, they get confused.

The confusion seems to stem from Python's lack of a proper "build" step in the CI/CD pipeline. I've had the "everything involves a build" argument and the "well, setup.py is analogous to a build" argument. I have to acquiesce to these positions as part of making progress. In this case, though, reasoning by analogy can be misleading.

I want to focus on the two gates that are part of the code itself, separate from the rest of the pipeline.

  • Static Analysis 
  • >80% Code Coverage (which implies Unit Tests)

Unit Testing

My preference is to run the unit test suite first and get that out of the way. If the unit test suite fails, or fails to cover 80% of the code, any other considerations are moot.

I like Git triggers based on Pull Requests (PRs) and merges to master for checking these two conditions. I like the idea that a PR can't be discussed until unit tests pass. The checks can also be part of whatever other pipeline is going on, but I like them done early and often.

(I worked on a sprint team where the PR unit test wasn't trusted by one of the devs: he'd carefully check out the branch and rerun the unit tests. His comments were good, so the extra effort paid off. I guess.)

After flirting with a lot of frameworks, I'm happiest with py.test. I like the pytest-cov plug-in and the pytest-bdd plug-in.
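The coverage gate can be a single py.test invocation. Here's a minimal sketch, assuming the pytest-cov plug-in is installed; the package name mypackage and the tests/ directory are placeholders:

import sys

import pytest

# Run the unit test suite; fail the gate if coverage drops below 80%.
# "mypackage" and "tests/" are hypothetical names for illustration.
sys.exit(pytest.main(['--cov=mypackage', '--cov-fail-under=80', 'tests/']))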

Yes. We have acceptance tests for our features written in Gherkin. And we have pytest fixtures that are used by pytest-bdd to process the scenarios in the Gherkin feature files. It actually works out nicely because we have a cucumber.json file that makes everyone happy that we've run an acceptance test suite along with our unit test suite.
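Here's a minimal sketch of that wiring, assuming pytest-bdd; the feature file, scenario name, and step wording are all invented for illustration:

# features/sog.feature (Gherkin), reproduced as a comment:
#     Feature: SOG analysis
#       Scenario: Valid SOG values are summarized
#         Given a CSV file with SOG values
#         When the summary is computed
#         Then the mean is reported

import statistics

import pytest
from pytest_bdd import scenario, given, when, then

@pytest.fixture
def context():
    # shared mutable state between the When and Then steps
    return {}

@scenario('features/sog.feature', 'Valid SOG values are summarized')
def test_sog_summary():
    pass

@given('a CSV file with SOG values')
def sog_values():
    # stands in for reading a real CSV fixture file
    return [1.0, 2.0, 3.0]

@when('the summary is computed')
def compute_summary(sog_values, context):
    context['mean'] = statistics.mean(sog_values)

@then('the mean is reported')
def check_mean(context):
    assert context['mean'] == 2.0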

What's important is that the coverage report is painless and automatic.

And it's compatible with the Ruby-based cucumber tool without involving any actual ruby.

For integration testing, we use Behave. This is a bit more cumbersome than pytest-bdd, but it's appropriate for the bigger-picture testing where we have a docker cluster and have to see a number of "Then" steps to confirm operations spread across a suite of microservices.

The goofy question that often leads to endless confusion is the relationship between unit testing and "build." The setup.py setup definition includes a `tests_require` parameter. This *should be* all that's needed to do `python setup.py test`, which *should be* all that's involved in testing.
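Here's a minimal sketch of that wiring, using the (2017-era) pytest-runner approach; the package name and dependency lists are hypothetical:

# setup.py -- a sketch; the name and dependencies are placeholders
from setuptools import setup, find_packages

setup(
    name='mypackage',
    version='1.0.0',
    packages=find_packages(),
    setup_requires=['pytest-runner'],  # wires `python setup.py test` to py.test
    tests_require=['pytest', 'pytest-cov', 'pytest-bdd'],
)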

Is it a "build"? No. But. You can tell the DevOps folks it's a build if it makes them happy.

Static Analysis

There are several kinds of static analysis. Folks who work in Java are used to having Sonar analysis performed. This is above and beyond the static analysis already performed by the compiler. It seems excessive to me, but folks deploying Java seem to like it.

For Python, there are two important static analysis tools: pylint and mypy. And this is another source of profound confusion for DevOps folks new to Python.

I like to extract the last line of the pylint output and use that numeric score as the "bottom line" on static analysis part 1. While the default setting is 9.5, that can be a challenge, and we prefer 9.0, along with some local pylintrc changes to adjust a few checkers (e.g., set line length to 120).
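The extraction can be a few lines of Python. A minimal sketch, assuming pylint's usual "Your code has been rated at N/10" summary line; the package name and the 9.0 threshold are placeholders:

import re
import subprocess
import sys

# "mypackage" and the 9.0 threshold are assumptions for illustration.
output = subprocess.run(
    ['pylint', 'mypackage'], stdout=subprocess.PIPE, universal_newlines=True
).stdout
match = re.search(r'rated at ([-\d.]+)/10', output)
score = float(match.group(1)) if match else 0.0
sys.exit(0 if score >= 9.0 else 1)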

For mypy, it's a little bit more complex. We're still fumbling around here.

Ideally, the type hints are all clean and mypy has nothing to say. We can, of course, fix any errors by claiming everything uses Any and returns Any and every assignment statement sets an Any value. But that's so wrong.

There are (still) modules which require typeshed stub definitions. Ideally, we'd provide these. This would be better than using Any as a hack-around. While good, it's a lot of work.

For now, I think it's sensible to have two "pass" rules for mypy: clean or typeshed error. If mypy is silent, that's perfect. If mypy can't find stubs in typeshed, we can let this go for now and log an issue from the CI/CD pipeline to note the presence of technical debt.
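A sketch of that two-rule gate follows. The exact wording of mypy's missing-stub message varies by version, so treat the filter below as an assumption to verify against your own mypy output:

import subprocess
import sys

# Pass if mypy is silent, or if its only complaints are missing stubs.
result = subprocess.run(
    ['mypy', 'mypackage'], stdout=subprocess.PIPE, universal_newlines=True
)
real_errors = [
    line for line in result.stdout.splitlines()
    if 'error:' in line and 'No library stub file' not in line
]
if real_errors:
    print('\n'.join(real_errors))
    sys.exit(1)
# Any remaining missing-stub lines are technical debt: log an issue, but pass.
sys.exit(0)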

In the best of all worlds, we'd fork the package, fix the type hints, and put in a PR. That's a lot better than using typeshed to work around the lack of hints. 

And, of course, there's the "build" question. For mypy to work, the dependencies (or their typeshed stubs) must be present. We wind up doing a `python setup.py install` to build out the requirements. Is this a "build"? Maybe. You can tell the DevOps folks it's a build if it makes them happy.

If you want idempotent server (or container) builds, you'll need to be sure that you pin specific versions. It can help to break this into two parts:
  • a requirements.txt with specific versions 
  • a generic version-free high-level list in setup.py
The reason for this separation is to make it easy to do a `pip install` or `conda create` from the detailed requirements. Once that's out of the way, the `python setup.py` will run very quickly. If you're working with Docker containers, the `pip install` (or `conda create`) can be part of the Dockerfile, and then tests or static analysis can be run separately, after the initial wave of installations.
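Here's a sketch of the split; the package names and version pins are made up:

# requirements.txt -- exact pins for idempotent builds (shown as a comment):
#     requests==2.18.4
#     sqlalchemy==1.1.15

# setup.py -- generic, version-free dependency names
from setuptools import setup

setup(
    name='mypackage',
    version='1.0.0',
    install_requires=['requests', 'sqlalchemy'],  # no version pins here
)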

Tuesday, November 7, 2017

Python Type Hinting -- generally easy until you find your design flaws

Adding type hints is easy and fun. Seriously. It's not a lot of work.

Until.

Until you find a piece of code that does more than what you sort-of thought it kind-of did.

def null_aware_func(x):
    if x is None:
        return x
    return 2.2*x**1.05

This is a stab at a none-aware computation.

Let's add type hints, shall we?

def null_aware_func(x: float) -> float:
    if x is None:
        return None
    return 2.2*x**1.05

This won't fool mypy. Sigh. It passes unit tests, but it's flagged as a problem.

We have a variety of ways to define this function. And that means we need to think carefully about our None-aware design.

Is this really an @overload?

from typing import overload

@overload
def null_aware_func(x: None) -> None:
    ...
@overload
def null_aware_func(x: float) -> float:
    ...
def null_aware_func(x):
    if x is None:
        return None
    return 2.2*x**1.05

And yes, the ... is legit Python syntax. (It's a rarely used token that forms the entire body of each overload stub.)

Or is this a more advanced type?

from typing import Optional
OptFloat = Optional[float]

def null_aware_func(x: OptFloat) -> OptFloat:
    if x is None:
        return None
    return 2.2*x**1.05

I'd argue that OptFloat is a more sensible definition. However, if this is the only function that's none-aware, perhaps it's an overload.

The deeper question is one of underlying meaning. Why are we doing this? What does it mean?

And. Bonus. Will this be working in a SQLAlchemy environment, where they have their own wrappers for database objects, meaning that `is None` doesn't work and `== None` is required?
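Here's a minimal sketch of that SQLAlchemy wrinkle; the Reading model is hypothetical:

from sqlalchemy import Column, Float, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Reading(Base):  # a hypothetical table of GPS readings
    __tablename__ = 'reading'
    id = Column(Integer, primary_key=True)
    sog = Column(Float, nullable=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# The overloaded == builds a SQL "IS NULL" clause; a plain `is None`
# comparison would be ordinary Python and always False here.
missing = session.query(Reading).filter(Reading.sog == None)  # noqa: E711
print(missing.count())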

What's important is that adding type hints forced us to think about what we were doing. Unlike Java, we did this without stopping progress for an extended period of "wrestling with the compiler". We can use Any temporarily because the unit tests all pass. Then, we can pay down the technical debt by fixing the type declaration.

Total. Victory.

Tuesday, October 31, 2017

Some Reading

Higher-Order Functions. A really cool idea. Javascript isn't my favorite language.

https://medium.freecodecamp.org/higher-order-functions-in-javascript-d9101f9cf528

This, on the other hand, is huge: trunk-based development.

https://codeburst.io/trunk-based-development-vs-git-flow-a0212a6cae64

I'm really tired of having a dev branch with periodic commits to master so we can deploy from master. It's so much nicer to tag a release and deploy that.

Here's the top 10 Python list for September 2017.

https://medium.mybridge.co/python-top-10-articles-for-the-past-month-v-sep-2017-e42c26816ab9?source=email-f2cdc4351994-1507375544705-digest.reader------0-37&sectionName=top&gi=9a6dd40e3790

Learning Python: Zero to Hero: https://medium.freecodecamp.org/learning-python-from-zero-to-hero-120ea540b567

Why we switched from Python to Go: https://codeburst.io/why-we-switched-from-python-to-go-60c8fd2cb9a9. We distribute a CLI for our API in Go because it's slightly simpler to provide executables in Go. However, our user community has Python already... We need to provide similar functionality in Python.

Technical Debt: https://medium.freecodecamp.org/what-is-technical-debt-and-why-do-most-startups-have-it-9a54458daabf

Software Engineering v. Programming. https://medium.com/@samerbuna/software-engineering-is-different-from-programming-b108c135af26

Annotations in Java? Ugh. The level of complexity seems to have gotten out of control. https://blog.softwaremill.com/the-case-against-annotations-4b2fb170ed67

Programming vs. the text of the code itself. https://medium.com/@karolismasiulis/programming-is-not-about-text-c205ba6aa3ba I don't buy the reductionist one-dimensional view of text. Yes -- in a narrow, technical sense, it's one-dimensional. If you want to be really reductionist, it's a stream of bytes, which is really just a base-256 number. I don't think the reductionist argument is helpful. However, I am sure that the declarative/imperative distinction is worthy of a lot of thought, and this is a nice comparison. (In spite of the javascript.)

Tuesday, October 24, 2017

Programming by Search, Copy, and Paste Leads to Epic Fail

In a way, this is about an epic fail attempting copy-and-paste coding. But really, this is about thinking outside the box. The issue -- to me -- comes from failing to see the box. Here's the body of the email, edited slightly.
"...how determine when a file has completed downloading. It would be helpful if code snippets in a unix shell and Python. 
"I did Google but none seemed to address the fundamental race conditions. They all involve a variant of try, sleep and try again. This is problematic for my particular case because the file sizes very significantly."
I'll ignore the grammar problems and focus on the intent of the "I did Google..." part. Based on some personal knowledge, I doubt there was more than a single search string tried. And I doubt that more than a single page of the response was looked at. Those are not important concerns.

The important concern is the shocking vagueness of the problem statement. These words are almost entirely meaningless:
"a file has completed downloading"
Imagine the variety of possible file transfer protocols that could be involved, and how many of them can be properly scripted. Take all the time you want. It can help to make a list of all the protocols that make this a non-problem.

No protocol was named. Therefore, a protocol was assumed. And the presence of this kind of tacit assumption forms an implicit box restricting what they're doing. The restriction is so unyielding to them that they don't even feel the need to mention it. It's as essential to them as air. They need it, but cannot see it, and refuse to acknowledge it.

At this point, all we can do is make random guesses.

("Why didn't you ask them for clarification?" you ask.  Good point. It's a personal failure in this case. The back-and-forth would take days. Eventually, they would send me useless explanations of deep ineptitude or a need to engage in corporate politics. Or both. I'll admit that I'm a jerk about requiring folks to take a first step and make a stab at code. Without code, I find it largely impossible to determine what they're really talking about. The above question is a prime example of a disconnection from reality that's too exasperating to deal with except superficially.)

Identifying the Box

Guess #1. This may be about FTP (or SFTP) file transfers. Further, it may involve FTP file uploads to a server, where the client doesn't disclose a size. Yes, the word "downloading" seems to preclude this guess, but almost all other choices aren't even possible.

If it really was a client-side download, this is trivially automated using any of the available FTP client programs, including wget, curl, sftp, etc. The Python ftplib seems to be a fully automated client for FTP. The documentation is packed with examples. It seems unlikely that the question is actually client-side.

It's also possible that a single search failed to reveal all these automatable FTP clients.

Guess #2. "determine when"? Who actually cares when the upload finishes? An upload matters to the next client doing a download, or -- perhaps -- to a process that's supposed to consume the uploaded file. Is that what this is about?

Is the real question "how to trigger processing of an uploaded file when using FTP?"

In this case, we're left with stacks of follow-up questions. Primarily: "Why are you using FTP?"

If they replace their silly FTP (or SFTP) server with a RESTful API, they won't have these problems. It takes a few days to write a secure file-upload Flask container. With a swagger spec. And unit tests. And Gherkin feature definitions, and a behave test suite to be sure it *really* works.  It doesn't need very many routes. On completion of upload, it can fork off subprocesses to process the uploaded files. This is not hard. Really. Flask + Celery will do this.
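To make that concrete, here's a minimal sketch -- nowhere near the secure, fully-tested service described above. The form-field name, destination path, and broker URL are all assumptions:

from celery import Celery
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
worker = Celery('tasks', broker='redis://localhost:6379/0')  # broker URL is an assumption

@worker.task
def process(path):
    ...  # parse, validate, and load the uploaded file

@app.route('/upload', methods=['POST'])
def upload():
    incoming = request.files['data']  # the 'data' form-field name is made up
    dest = '/tmp/' + secure_filename(incoming.filename)
    incoming.save(dest)
    process.delay(dest)  # hand off to a worker; don't block the request
    return '', 201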

Understanding the problem seems to require stepping outside of some box. It appears this is a struggle because of a poorly-defined box: a box assumed without being stated.

Working With the Box

At this point, we can only pretend the problem is about triggering processing after an upload. Let's further pretend the FTP is a legal requirement. Or we can pretend that SFTP is imposed by an inept IT department who also loves living inside some poorly-defined box. We're stuck with FTP for inexplicable reasons.

What can we do to game an FTP server to trigger processing of files of unknown sizes?

  • Write our own FTP server. This isn't very hard. It is, however, far simpler to write a RESTful Flask service that handles the file upload as a POST request via curl or wget. Writing an FTP server's a pain in the ass because the FTP protocol is surprisingly complex. Even writing an FTP subset that serves very specific client needs can be painful.
  • Poll the upload directory. This implies a race condition. Polling (and the race condition) has no practical consequences. If you want "real-time", write a RESTful API and don't use FTP. Since you're insisting on FTP, a delay is going to be part of the solution.
I'm more than a little shocked that search was considered a viable design strategy for solving this problem. It doesn't seem like searching for solutions is required at all. I'm probably overstating this, but it seems sort of trivial and obvious that either a second file is required or a better file protocol is required. This seems to be simple "thinking", not "googling."

There are bunches of ways to approach this. Here are a few ways to use a second file and some kind of naming convention to show that two files are part of one transfer.
  • Send a file with the size of the target file *before* the target file. When the target file matches the stated size, initiate processing. (See the sketch below.)
  • Send a file with the size and MD5 checksum of the target file, etc.
  • Send a file *after* the target file with the size and checksum. When this file shows up, simply confirm that the first file is all there.
Yes, polling is required. However, there's no race condition: there are two separate conditions which must both be met. The files are provided serially, the conditions are met serially.
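Here's a sketch of the first convention: poll until the target file reaches the size announced in the companion file. The file names and polling interval are made up:

import time
from pathlib import Path

def wait_for_upload(target=Path('data.csv'), size_file=Path('data.csv.size'),
                    interval=5):
    # Wait for the companion file announcing the expected size.
    while not size_file.exists():
        time.sleep(interval)
    expected = int(size_file.read_text().strip())
    # Then wait for the target file to reach the announced size.
    while not (target.exists() and target.stat().st_size >= expected):
        time.sleep(interval)
    return target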

Here are two approaches that use a file format that properly handles completeness.
  • Gzip the file. The file receipt polling loop repeatedly tries to unzip it. If the unzip fails, the file is incomplete. (See the sketch after this list.)
    • Don't want to spend too much CPU time? Wait until the size has been stable for two polling intervals and only then try to unzip.
  • Tar the file. Yes. A tar archive of a single file can be checked for integrity. When the archive can be checked and shown to be valid, the target element can be extracted and processed.
    • Don't want to spend CPU time validating? Again. Wait for a stable size for a few polling intervals.
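A sketch of the gzip variant: a truncated .gz file can't be read to the end, so a successful full read means the transfer is complete. The file name is a placeholder:

import gzip

def is_complete(path='upload.csv.gz'):
    # A partial gzip file raises an error before the final checksum verifies.
    try:
        with gzip.open(path, 'rb') as member:
            while member.read(64 * 1024):
                pass
        return True
    except (OSError, EOFError):
        return False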
And, of course, it's possible to invent an entirely home-brewed file-wrapping protocol. Here's an approach.
  • Wrap the content in MIME-style headers. These can provide a size or a terminator string to help identify the end of the transfer.
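A sketch of that wrapper, sender and receiver sides; the header name mimics MIME, and everything about it is illustrative:

# Sender: prefix the payload with a size header.
payload = b'...the file content...'
wrapped = 'Content-Length: {}\r\n\r\n'.format(len(payload)).encode('ascii') + payload

# Receiver: split off the header, then check for the complete payload.
header, _, body = wrapped.partition(b'\r\n\r\n')
expected = int(header.split(b':')[1])
complete = len(body) >= expected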
The point here is that googling for code isn't part of solving this problem. Indeed, it can't solve this problem. Merely thinking about the nature of the problem ("triggering processing", "knowing the size") seemed necessary and sufficient to frame a solution.

What's Essential

Here's what didn't happen:
  • State the actual problem. 
  • Identify the boxes. Write them down. In words. There may be more than one.
  • Locate code to work with the boxes. Find the libraries or packages. Install them. Write a hello-world example to be sure that the code is understood.
Then -- and only then -- can we start to imagine solutions and ask questions about the boxes or the code that might manage the boxes.

It's impossible to state this strongly enough: We can't think outside the box if we refuse to acknowledge the box.

Tuesday, October 17, 2017

Why I like Functional Composition

After spending years developing a level of mastery over Object Oriented Design Patterns, I'm having a lot of fun understanding Functional Design Patterns.

The OO Design Patterns are helpful because they're concrete expressions of the S. O. L. I. D. design principles. Much of the "Gang of Four" book demonstrates the Interface Segregation, Dependency Inversion, and Liskov Substitution Principles nicely. They point the way for implementing the Open/Closed and the Single Responsibility Principles.

For Functional Programming, there are some equivalent ideas, with distinct implementation techniques. The basic I, D, L, and S principles apply, but have a different look in a functional programming context. The Open/Closed principle takes on a radically different look, because it turns into an exercise in Functional Composition.

I'm building an Arduino device that collects GPS data. (The context for this device is the subject of many posts coming in the future.)

GPS devices generally follow the NMEA 0183 protocol, and transmit their data as sentences with various kinds of formats. In particular, the GPRMC and GPVTG sentences contain speed over ground (SOG) data.

I've been collecting data in my apartment. And it's odd-looking. I've also collected data on my boat, and it doesn't seem to look quite so odd. Here's the analysis I used to make a more concrete conclusion.

import csv
import statistics
from pathlib import Path

def sog_study(source_path=Path("gps_data_gsa.csv")):
    with source_path.open() as source_file:
        rdr = csv.DictReader(source_file)
        # keep the non-empty SOG values, converted to float
        sog_seq = list(map(float, filter(None, (row['SOG'] for row in rdr))))
        print("max {}\tMean {}\tStdev {}".format(
            max(sog_seq), statistics.mean(sog_seq), statistics.stdev(sog_seq)))


This is a small example of functional composition to build a sequence of SOG reports for analysis.

This code opens a CSV file with data extracted from the Arduino. There was some reformatting and normalizing done in a separate process: this resulted in a file in a format suitable for the processing shown above.

The compositional part of this is the list(map(float, filter(None, generator))) processing.

The (row['SOG'] for row in rdr) generator can iterate over all values from the SOG column. The filter(None, generator) will drop all falsy values (the empty strings from sentences without SOG data), assuring that irrelevant sentences are ignored.

Given an iterable that can produce SOG values, the map(float, iterable) will convert the input strings into useful numbers. The surrounding list() creates a concrete list object to support summary statistics computations.

I'm really delighted with this kind of short, focused functional programming.

"But wait," you say. "How is that anything like the SOLID OO design?"

Remember to drop the OO notions. This is functional composition, not object composition.

ISP: The built-in functions all have well-segregated interfaces. Each one does a small, isolated job.

LSP: The concept of an iterable supports the Liskov Substitution Principle: it's easy to insert additional or different processing as long as we define functions that accept iterables as an argument and yield their values or return an iterable result.

For example.

def sog_gen(csv_reader):
    for row in csv_reader:
        yield row['SOG']

We've expanded the generator expression, (row['SOG'] for row in rdr), into a function. We can now use sog_gen(rdr) instead of the generator expression. The interfaces are the same, and the two expressions enjoy Liskov Substitution.

To be really precise, annotation with type hints helps. Something like sog_gen(rdr: Iterable[Dict[str, str]]) -> Iterable[str] would make the contract explicit.

DIP: If we want to break this down into separate assignment statements, we can see how a different function can easily be injected into the processing pipeline. We could define a higher-order function that accepted functions like sog_gen, float, statistics.mean, etc., and then created the composite expression.
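For example, here's a sketch of such a higher-order function; the name summarize and its parameter order are invented for illustration:

import statistics

def summarize(extract, convert, reduce, rows):
    # Every component is injected; summarize() names no concrete dependency.
    return reduce(list(map(convert, filter(None, extract(rows)))))

# Inject different components without modifying summarize() itself:
#     summarize(sog_gen, float, statistics.mean, rdr)
#     summarize(sog_gen, decimal.Decimal, statistics.stdev, rdr)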

OCP: Each of the component functions is closed to modification but open to extension. We might want to do something like this: map_float = lambda source: map(float, source). The map_float() function extends map() to include a float operation. We might even want to write something like this.  map_float = lambda xform, source: map(xform, map(float, source)). This would look more like map(), with a float operation provided automatically.

SRP: Each of the built-in functions does one thing. The overall composition builds a complex operation from simple pieces.

The composite operation has two features which are likely to change: the column name and the transformation function. Perhaps we might rename the column from 'SOG' to 'sog'; perhaps we might use decimal() instead of float(). There are a number of less-likely changes. There might be a more complex filter rule, or perhaps a more complex transformation before computing the statistical summary.  These changes would lead to a different composition of the similar underlying pieces.