Tuesday, April 24, 2018

Functional Python Programming 2e -- Type Hints!

You might want to look into this: Functional Python Programming - Second Edition.

Let's talk about the type hints, shall we?

Most of the examples have had type hints added. This means running everything through mypy. It also means running everything through doctest.

More important than the technical steps, there's a change in viewpoint that comes with type hints.

If you follow a variety of Pythonistas on Twitter, you can see some debates on the merits of type-hinting. Some key points:

  • It's hard.
  • It's so hard, only do it if you absolutely need it.
  • It's too verbose
  • It's hard, but it can help.
  • It's really helpful.
  • It represents a "gap" in the language and without run-time type checking, the whole thing is worthless.
The last point is a weird view. I work in a shop that's heavily Pythonic. But. You still hear nonsense. Python is a very popular language and its popularity is growing. The popularity of Python isn't like the popularity of a movie, where you're not planning on making a living off of it. (I know someone who makes their living off the popularity of movies.) The popularity of Python is like the popularity of automobiles or air travel or electricity.

I hear "a real language would have prevented that with type-checking." And I respond, "Then why do you unit test?" And they don't really have much of an answer. Python has the same workflow as statically type-checked languages, so the "prevention" thing seems to be nonsense.

Moving on.

"It's hard." Anything new is hard. The complaint is vague, so it's *hard* to respond. (Heh.)

Anything like "only do it if you absolutely need it" bothers me because it seems like a passive-aggressive barricade around things. Also. It's vague.

Verbosity

Verbosity in type hints is a real problem. When creating complex objects from built-in types, we often forget to give names to the intermediate object classes.

Consider Dict[Tuple[Tuple[int, int], Tuple[int, int]], float]

It's long. It describes a structure like this {((12, 13), (14, 15)): 2.8284271247461903, ...} 

Writing something like the following d_map() function without hints is easy. Adding hints seems hard.

def d_map(points):
    return {(p1, p2): hypot(p1[0]-p2[0], p1[1]-p2[1]) for p1, p2 in points}

The declaration became L.. O... N... G... because we ignored the intermediate types.

def d_map(points: List[Tuple[Tuple[int, int], Tuple[int, int]]]) -> Dict[Tuple[Tuple[int, int], Tuple[int, int]], float]:
    return {(p1, p2): hypot(p1[0]-p2[0], p1[1]-p2[1]) for p1, p2 in points}

These hints, however, don't really describe what's happening. The hints elide important details. The hints don't reflect the underlying semantics of the data structure.

One of Python's strengths is the rich collection of first-class data structures with built-in syntax. We can abbreviate some complex concepts into succinct, expressive code.

However.

We shouldn't lose sight of what the succinct code represents. And in this case, it represents some rather complex concepts.
<rant>
Let me sit in my lawn chair and shake my fist in helpless fury at you kids. When I was your age, we spent half a semester of undergraduate work trying to get linked lists and simple hash mapping to work. Months of work. Later on, as a professional -- years of actual experience -- it took forever to build a binary tree-based collections.Counter definition to gather simple numbers from a flat file. Nowadays, you just slap a Counter down into your code like it's a nothing. It's not a nothing. It's serious, sophisticated software engineering. It's more than Dict[Any, int]. </rant>
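
For contrast, here's a minimal sketch of the modern one-liner. The file name is made up for illustration; this isn't from the book.

from collections import Counter

# Gather simple frequencies from a flat file. "data.txt" is hypothetical.
with open("data.txt") as source:
    frequencies = Counter(line.strip() for line in source)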

What can we do?

When in doubt, Expose the Intermediate Types.

Point = Tuple[int, int]
Leg = Tuple[Point, Point]
Distances = Dict[Leg, float]
def d_map(points: Iterable[Leg]) -> Distances:
    return {(p1, p2): hypot(p1[0]-p2[0], p1[1]-p2[1]) for p1, p2 in points}

This exposes the details. In some cases, it causes us to rethink using a two-tuple to represent a point. The p1[0] syntax starts to chafe a little. Perhaps this should have been

class Point(NamedTuple):
    x: int
    y: int

That leads to tiny (almost-but-not-quite trivial) simplifications. Instead of building simple tuples for each point, we can now build named Point tuples and use p1.x and p1.y to make the code more civilized.
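
Here's a minimal sketch of what that revision might look like. This is my rework of the earlier example, not the book's final version.

from math import hypot
from typing import Dict, Iterable, NamedTuple, Tuple

class Point(NamedTuple):
    x: int
    y: int

Leg = Tuple[Point, Point]
Distances = Dict[Leg, float]

def d_map(points: Iterable[Leg]) -> Distances:
    # p1.x and p1.y read better than p1[0] and p1[1].
    return {(p1, p2): hypot(p1.x - p2.x, p1.y - p2.y) for p1, p2 in points}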

One consequence of this is avoiding the (), [], and {} literals when building tuples, lists, dicts, and sets. Yes. This is heresy. I seriously recommend using tuple(), list(), dict(), and set() because we can replace them with equivalent types. And yes, I text my mother with the same fingers that wrote that.
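
A tiny, made-up illustration of the point, using the Point class shown above:

p_literal = (12, 13)     # locked to a plain tuple forever
p_named = Point(12, 13)  # one name away from any equivalent type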

"But," you object, "It's objectively LONGER! You didn't save me anything! You're a fraud!"

My first response is, "Correct." It is objectively longer. And "Correct," I didn't really "save" you anything; I'm not sure what you're saving. Lines of code do have a cost, but I think clarity has value. And finally, "Correct," I've often been wrong, and I may be wrong here, too.

I like this because the type definitions are reusable; I think this can add clarity throughout the application.

When this kind of declaration is part of a reusable module, the goodness spreads like smiles and hugs throughout the application. Before long, other functions have been tweaked and everyone is sending each other little teddy-bear hug gifts with rainbow cupcakes.

(Please don't exchange mylar balloons. They're evil. Also, see this.)

tl;dr

When your type hints seem ungainly and large, consider Exposing the Intermediate Types. Break down a big structural type hint into the constituent pieces.

If you had to create a class definition for EVERY variation on list, dict, set, and tuple, what would your new class be named?

If you had to describe the underlying meaning of a class -- separate from its structure -- what name would you give it?

Picking names is one of the two hardest problems in computing. It isn't easy. (The other hardest problem? Cache invalidation and off-by-one errors.)

Friday, April 6, 2018

Should I use x.__len__() or len(x)?

In the context of providing type hints, someone had a function like this.

def f(x: Sized) -> Whatever: ...

And, since sized objects have a __len__() method, it seemed sensible to use x.__len__(). It was a good question about the use of special methods.

My advice is to avoid using the special methods in general. Use them only when defining classes that need to behave like Python objects.

(I'll make an exception for using x.__dict__, to avoid having to introduce an explicit dictionary object when there's one built-in to most objects.)

Use len(x) and be happy.  The function wrapper around a special method is a common Python feature; it occurs in many places; use it.
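
A minimal sketch of what that looks like in practice; the function name here is made up.

from typing import Sized

def is_empty(x: Sized) -> bool:
    return len(x) == 0   # not x.__len__() == 0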

Tuesday, April 3, 2018

RESTful Web Services Design

This -- REST is the new SOAP -- has so many demolished strawman arguments that it feels like looking at a van Gogh painting of people harvesting wheat.

I won't dive into listing all the strawmen. Most of my responses are approximately "How is that an actual problem?" or "Yes, it was new to you, so?" or "Yes, people disagreed with each other over an implementation choice."

Some of the observations about "proper REST" vs. "bah, that's not really RESTful" point out the differences between expedient REST-like design and really good REST design. Some of these considerations can be helpful.

The one point worthy of deeper thought is the nature of verb-heavy highly-stateful RPC design and RESTful noun-heavy design. The question here is the definition of state and the nature of state change. Some people appear to be enthralled with many nuanced state changes. I've been doing too much data warehouse and functional design where the data is essentially stateless and CRUD rules are refined down to CRD with a rare U under limited circumstances.

And, yes, that means using relatively "stateless" OO design where an object is wrapped inside a new object that includes derived data or a composition of stateless objects. The following example leverages duck typing to create immutable objects where the class reflects the state of the object.

class Thing:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def set_c(self, c):
        # No mutation: "setting" c wraps this Thing in a new object.
        return DerivedThing(self, c)

class DerivedThing:
    def __init__(self, thing: Thing, c):
        self.thing, self.c = thing, c
    @property
    def a(self):
        return self.thing.a
    @property
    def b(self):
        return self.thing.b
    @property
    def value(self):
        return self.a * self.c + self.b
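
Usage looks like this -- each "state change" builds a new object instead of mutating the old one:

t = Thing(2, 3)
d = t.set_c(5)
print(d.a, d.b, d.c, d.value)   # 2 3 5 13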

And, yes, I'm not building things which are absolutely stateless because Python has stateful lists and mappings, and web services rely on stateful persistence. And, yes, I reject functional purism because I'm stupid. Can we move on, now?

Something that seemed essential to me (but appears to be confusing from reading complaints about REST) is understanding the notion of "state." One view of state is an aggregation of details. The final state of an object is a reduction over the changes -- akin to a sum(), max(), or min(), or perhaps something more involved like last(). The paucity of REST verbs is not a problem when you understand current state as the end product of applying a journal of previous state change mementos. Each "change", then, isn't a complex Update (REST Put or Patch) where there aren't enough verbs to describe each nuanced change. It's a Create (REST Post) of the next change memento. The RESTful service can eagerly apply the change to compute the current state. Or it can lazily apply the changes to compute the current state.
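
Here's a minimal sketch of that idea. The journal contents are made up, and the fold is deliberately simple.

from functools import reduce

def apply_change(state: dict, change: dict) -> dict:
    # Fold one change memento into the current state.
    return {**state, **change}

journal = [{"a": 2, "b": 3}, {"c": 5}, {"c": 7}]   # each POST adds one memento
current = reduce(apply_change, journal, {})
print(current)   # {'a': 2, 'b': 3, 'c': 7}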

Some of the blog post cited above sounds like "it was new and I didn't like it." Therefore, read the article, locate the strawmen, and know there will always be someone who will complain. Some of the complaints will have merit, some will be whining about the novelty.

In a RESTful context, I'm a fan of this kind of pattern.

/things:
    post:
        summary: Creates a new thing with a and b
        responses:
            201:
                description: thing was created
/things/{id}/c:
    post:
        summary: Sets a value of c for an existing thing, previous value is discarded.
        responses:
            201:
                description: c property of thing {id} was set
For more useful advice, start here, for example: RESTful API Designing guidelines — The best practices. Articles like this are useful, too: 10 Best Practices for Better RESTful API.

Tuesday, March 27, 2018

Functional Python Programming 2e -- Now With Type Hints

Functional Python Programming, 2nd ed.

This has been fun: cleaning up some rambling, and resetting the math to be sure it's actually right.

And.

Type Hints.

Almost every example has had type hints added.

(And I raised the pylint scores by rearranging some spacing and what-not.)

Bonus. We will be moving the publication date up from June to possibly April. We're still doing technical reviews and what-not, so things aren't *done*.

What was hardest?

Generics. Specifically, decorators can have quite complex type hints. Indeed, type hinting raises important questions about trying to write super-generic functions that can handle too wide a spectrum of types.

def some_function(arg):
    if isinstance(arg, dict):
        do_something(arg)
    elif isinstance(arg, list):
        do_something({i: v for i, v in enumerate(arg)})
    else: 
        do_something(dict(arg=arg))


This kind of thing turns out to be ill-advised. It's probably a bad design. More importantly, it's difficult to annotate, making it difficult to discern if it behaves correctly.

In this case, the argument is Union[Dict, Sequence, Any]. I've got a few examples of Union types, but they're rare because I'm not a fan in the first place. And the few places I used them, the complexity of getting past mypy type checks showed that they add risk and cost without a dramatic reduction in complexity.

In this specific case, the some_function() function is merely a type-converting wrapper around the do_something() function. It's probably better to refactor the type conversion responsibility into the clients of some_function().
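
A hedged sketch of that refactoring: some_function() accepts exactly one type, and each client does its own conversion. The do_something() body here is a stand-in.

from typing import Any, Dict

def do_something(arg: Dict[Any, Any]) -> None:
    print(arg)

def some_function(arg: Dict[Any, Any]) -> None:
    # One narrow type; no isinstance() switching.
    do_something(arg)

some_function({i: v for i, v in enumerate(["a", "b"])})   # the list client converts
some_function({"arg": 42})                                # the single-value client wraps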

The arguments about "encapsulation" or "the client shouldn't know that detail" are generally kind of silly. We're all adults here; we generally have to know what's going on with respect to the conversions in order to use the function correctly and write unit tests.

Tuesday, March 20, 2018

HATEOAS is useless? Or not used enough?

See Why HATEOAS is useless and what that means for REST.

The article provides a background leading up to these observations:
  • There are very few good tools to create a REST API using this style
  • There are no clients widely used to consume these types of APIs
The "useless" in the title is more like "not used enough."

There's a multi-part conclusion that may be more helpful if it's fleshed out further. For now, however, it appears that the big problems center around:
  • You still need to write OpenAPI Specifications (OAS, f/k/a Swagger). I don't think this is bad. The blog post makes it sound like a problem. I think it's essential.
  • You need to put versioning somewhere. The path is less than ideal. I'm big on the Accept header containing application-specific MIME types. For example, application/vnd.com.your-name-here.app.json+v1 (see the sketch after this list). This doesn't strike me as a problem, either.
  • The whole approach is "closer to RPC than some REST lovers like to admit." I think this point revolves around the way JSON-RPC or SOAP involves some overheads above basic HTTP that are unhelpful. I don't think the "closer to RPC" follows logically from the lack of tooling for HATEOAS, but it certainly could be true that a badly-done API might involve too many of the wrong kinds of overheads.
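
Here's a minimal sketch of the Accept-header versioning idea from the second bullet, using requests and a made-up host:

import requests

response = requests.get(
    "https://api.example.com/things/42",
    headers={"Accept": "application/vnd.com.your-name-here.app.json+v1"},
)
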
I think there's a hidden strawman here. The "automatic discovery" idea. I don't think this idea makes a lick of sense. Some people think it's implied (or required) by REST, and any failure to provide for fully-automated semantically rich discovery of an API is some kind of failure.

I don't think full semantic discovery is possible or even desirable. 
  • It's not possible because of the problem of assigning names and meanings to resources and verbs in an end-point. The necessary details can only be exposed with a semantically complete ontology and complex SPARQL queries into the ontology to find resources and end-points. 
  • It's not desirable because we replace a human-focused OAS with a complete ontology that has to be rigorously defined, and tested to be sure that all kinds of automated discovery algorithms can understand the provided details. And none of this addresses the actual application; it's all rich, detailed meta-description of the application.
I don't see why we're trying to replace people. API discovery is actually kind of hard. The resources, their relationships, and the verbs for getting or updating those resources involves an essentially difficult knowledge capture and dissemination problem. 

Friday, March 9, 2018

Python Interviews

The #Python Interviews book is out. Mike Driscoll interviewed a bunch of Python experts. And me, too. Get 30% off the Amazon paperback version of the book using the code 30PYTHON: https://goo.gl/5A3uhq

Here's a flavor of how this went:

Driscoll: So how did you end up becoming an author of Python books?
Lott: Most roles in my career more or less just happened to me, but becoming a writer was a conscious decision.
In this case, I had decided that there could be value in teaching the Python language and the associated software engineering skills. I started to collect notes for a book in 2002. By 2010, I had tried self-publishing several books on Python.
When Stack Overflow started, I was an early participant. There were many interesting Python questions. The questions showed gaps where more information was needed about Python specifically and software engineering in general. Over a few years, I answered thousands of questions about Python and somehow built up a large reputation.

Monday, March 5, 2018

Python Interviews -- Coming Soon from Packt

See https://www.packtpub.com/web-development/python-interviews

I'm honored.

I'll be studying what the other folks have to say in here. Being in the Python community means respecting others' views. And that means understanding them.

This looks like fun because it isn't *deeply* technical, it's about people and technology.

Tuesday, January 30, 2018

The SQL-based relational database isn't perfection? Whoa if true

Yes, there are people for whom document databases (and the file system) are confusing and weird.

I was sent this: Relational Algebra Is the Root of SQL Problems which is really brilliant and provides some helpful concrete examples of stuff SQL is really bad at.

The accompanying email was filled with nonsense about how important and world-changing SQL was.

I can't disagree. Back when disk was very expensive and very small, the SQL-based join strategies were essential for micro-managing every bit of data. Literally. Every Bit.

And then we would denormalize the structure for performance reasons. Because we always knew the SQL was terrible at a fairly large number of things.

Those days are behind us. We can now choose to use a document database, and make our lives simpler. Storage is relatively inexpensive, and the labor to normalize and denormalize data doesn't create significant value. The need to write stored procedures to turn a single conceptual operation into a bunch of inserts and updates was a symptom that this wasn't the best approach.

I've had many "But what about..." conversations regarding document databases.

"What about ad-hoc queries in SQL?"

- Do you really do these without writing a Python script or creating a Pandas dataframe? I doubt it. But. If you really think you'll do this, most document stores either support a modified SQL or Javascript. And yes, you hate Javascript, duly noted. I hate SQL, so we're even there.

"What about joins?"

- It's a space-saving technique. We don't need the overheads to save the space. The "update anomalies" still require careful design, and may lead to some decomposition of data into multiple documents. But the ruthless normalization shouldn't be seen as a requirement.

"What about the schema?"

- It's brittle and schema migration creates a lot of low-value labor. We can use Python JSONSchema to validate documents. See NoSQL Database doesn't Mean No Schema.
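
A minimal sketch with the jsonschema package; the schema and document are made-up examples.

from jsonschema import validate

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "count": {"type": "integer"},
    },
    "required": ["name"],
}

validate(instance={"name": "thing", "count": 3}, schema=schema)   # passes silently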

Transactional v. Analytical

It requires some care to understand the distinction between "transactional" and "analytical" uses for data. While folks try to leverage this distinction, it's a spectrum, not a distinction.

A lot of data collection is a simple sequence of event documents. These have no sensible state change, so they're not really transactional. They are often created by concurrent processes where locking prevents corruption, so transactions *seem* helpful. Except, of course, the file system writes can be trivially sharded by process ID and then unified later. And all document databases serialize document writes from multiple client processes, so there's no added value in using a relational database.

Some data operations are properly stateful. By normalizing our tables, moving from consistent state to consistent state is made complex. Which requires a defined transaction as a work-around. And don't get me started on replication and two-phase commit as yet another layer of complexity on top of transactions.

A document database allows us to skip over 1NF. We can think of a document as being a row in a table where the data types are complex data structures involving mappings, sequences, strings, numbers, booleans, and nulls. (See JSON Schema.) A lot of multi-step SQL transactions are operations on several children of a common parent. If the parent was persisted as a single document, there wouldn't be multiple operations; an atomic MongoDB update operation can make complex rewrites to a complex document.
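
A hedged sketch of that kind of single-document rewrite, using pymongo and a made-up "orders" collection:

from pymongo import MongoClient

orders = MongoClient()["example_db"]["orders"]
orders.update_one(
    {"_id": 42},
    {
        "$set": {"status": "shipped"},
        "$push": {"items": {"sku": "ABC", "qty": 2}},
    },
)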

We can contrive a design where state changes must be coordinated and the data cannot be colocated in a single document. It's not difficult to stipulate enough requirements to make single documents difficult. The presence of these contrived requirements, however, doesn't suddenly invalidate document datastores for transactional data. In the SQL world, the idea of long-running, reversible transactions has always been a horrible problem. Allowing stacked "undo" for the user means either creating a chain of Memento objects that can recover previous state, or having numerous flags and indicators on each record, allowing the state to be reversed. Some design problems are really hard. And the SQL model seems to make them harder.

The core ACID concept of "always consistent" is -- in practice -- nonsense. As soon as we have to consider "isolation levels" and "read consistency," it becomes clear that there is no consistent state unless all transactions and queries are serialized via exclusive "whole database" locking. Competent DBAs know that long-running analytic queries performed concurrently with transactional updates can't use locking, and must tolerate inconsistencies in the database.

It's common practice to do data extracts so that analytic queries aren't working against the (inconsistent) transactional data. In this case, the frequency of extracts is the timing of "eventual consistency" promised by the BASE concept.

Bottom Line: Relational ACID rules are almost always broken in practice by read consistency rules and extracts to analytic databases. Analytical data is always based on eventual consistency expectations. The batch extracts mean "eventually" is measured in hours. A document data store can often create consistency in milliseconds. (MongoDB primary failure, voting, and secondary promotion to primary rely on a 10-second heartbeat, so it takes time to discover and repair.)

Also

A second email detailed their amazement (Amazing! Wow! Unbelievable! You Must Inform The World Of This!) that analytic processing of data is actually faster and simpler using the file system. The very idea of HDFS was so amazing that they were amazed.

Somehow, the idea of the raw filesystem as being really, really fast was the source of much amazement.

I'm glad they're making an effort to catch up. I'm glad they're seeing the relational model as a bad choice that has a limited number of use cases. Mostly, relational databases are useful for organizations that can't write API's to handle the integrity issues.

To SQL or NoSQL? That's the database question | Ars Technica

Tuesday, January 23, 2018

PyCon 2018 Program Committee


I was "volunteered" by a colleague to help the program committee for PyCon 2018. I rarely think of myself as qualified for this kind of thing. Yes. I have six books on Python (with a seventh on the way) but the PSF folks are brilliant and dedicated and hard-working, and I'm just a slob.

Yes, I do get to help the community by reviewing almost 700 individual proposals. Some good. Some really good. Some which we *must* hear. 

The collateral benefit? 

Side reading.

My browser history is filled with things I hadn't known existed. 

Next time, I need to get started *before* the deadline so I can have a little more interaction with the authors. There were a few outlines where we could only discuss the possibility of making a change if the proposal was accepted. 

In particular, there seemed to be a *lot* of Machine Learning-Bayesian-Deep Learning-Recommender-Data Science pitches that had abbreviated outlines. They tend to all look alike to someone who's not an expert. Five bullet points: the author's background, the problem domain, ML (or modeling or whatever), a Jupyter notebook showing the results, and a conclusion.  Providing some distinct angle to the pitch (other than the problem domain) might help me understand them more fully. It seemed best to defer to the consensus on these.

I've been learning to live with my personal bias against meta-talks about building community. A presentation on community building at a community event seems redundant to me. But that doesn't mean they're not thorough, articulate talks that will be useful to others. Since I have a seat at the table, I'm biased. The Python tie-in feels weak, but our code of conduct (Open, Considerate, Respectful) means PyCon really is the place for more of this. Most importantly, they're objectively solid talks. (And -- as a member of the over-represented old male nerd class, I do need to listen more.)

It's been enlightening. And the conference will rock.

Tuesday, January 16, 2018

I've decided on Windows -- Please help justify my choice

After many words, the email chain I received netted out to this:
  1. I can't teach myself data science on my crappy old Windows machine.
  2. I've decided to get a new Windows machine. Here are the specs.
My response was a mixture of incredulity and bafflement.

It appears that two things happened while I wasn't paying attention.
  1. Apple ceased to exist.
  2. The cloud ceased to exist.
I'm aware that many people think the Apple alternative is a non-starter. They are sure that after 40+ years selling computers, Apple is doomed, and we'll only have Windows on the desktop. Seriously.

Some people have farcical explanations for why Apple Cannot be Taken Seriously.

In this specific instance, there was a large investment in Python and Java that somehow couldn't be rebuilt in Mac OS X. Details were explicitly not provided. Which is a way of saying there were no tangible "requirements" for this upgrade. Just specification numbers.

Important note. None of this involved "data" or "science." That was the baffling part. No objective measurement of anything. No list of software titles. No projects. No dataset sizes. Nothing.

The anti-cloud argument was even stranger than the anti-Mac argument.

Somehow, a super-large AWS server -- let's say it was a x1.16xlarge -- being used an hour a day (365 day*1 hr/day*$1.82/hr = $664) was deemed *more* expensive than a 64GB 6-core home-based machine that would sit idle 23 hours each day.

The best part of $664/yr being *more* expensive?

Expert Judgement.

No "data". No "science". No measurement. No supporting details.

I wish I'd kept the email describing how someone who knew something said something about pricing.  It was marvelous Highest Paid Person's Opinion nonsense.

AFAIK, they were using 8,766 hours per year to compare AWS computing vs. at-home computing. This meant that an m5.4xlarge should be considered as costing $1,939 each year. Presumably because they'd never shut it off.

It included terms like "half-way decent performance."

There's a depth of wrongness to this that's hard to characterize beyond no "data" and no "science".

Tuesday, January 9, 2018

Code Rewrites and How They Create Value -- Stop Fighting Against It

TL;DR -- To remove doubts and questions, rewrite it.

Many, many people are confronted with the request to maintain someone else's code.

Either it's open source, and you have to make formal PR's visible to the world.

Or it's "enterprise" in-house software, and you have to make PR's visible to a work team.

Or.

It's "work group" in-house software, and you have source that may not be under proper source-code control installed on a server where you're taking over someone else's carefully built structure of porcelain components.

In the first case -- public code -- a rewrite is a challenge. People depend on it having a well-known interface. A small change here could alienate large swaths of the user community. On the other hand. A small change here could make the project *more* useful to *more* people. This is challenging. My advice here is limited.

When we look at enterprise software, however, a rewrite won't have quite the same "blast radius."

When we look at work-group software, there is no blast radius.

Benefits of a Rewrite

Why rewrite? Three reasons.

  1. You can understand it. (This is HUGE.)
  2. You can make it objectively better (i.e., higher PyLint scores, better documentation coverage.)
  3. You can add or expand the test cases. 

The first and foremost reason -- understanding -- can't be overstated.

And.

It's a HUGE fight every time. The standard argument is "If It Ain't Broke, Don't Fix It."

This is, of course, based on misunderstanding the level of "broke." A delicately-balanced tower of porcelain components that worked once last month is -- in effect -- already pre-broken if any change will disturb the structure and ruin everything.

There are lots of examples.

  • The app only works with Pandas 0.12.0 and will not work with 0.13.x or the 0.22.0 you have in your default Conda environment.
  • The app only works when you provide --someoption=False, and no one can figure out why.
  • Some test cases have @skip because they don't work. But should.
  • The setup.py doesn't work and you can only run it using PYTHONPATH.
  • The default logging initialization is "somehow" not right and requires a manual override in the app.

I know. I've created all of these problems.

On one hand, we have management: "It ain't broke."

On the other hand, we have everyone else: "It's a fragile nightmare of pre-broken components that cannot be touched."

The Script

Here's how it plays out. In the Real World.

Management: "It ain't broke. Don't fix it."
You: "I can't make it work."
Management: "It ran last month."
You: "I made one small change and it doesn't run this month."
Management: "Back out the change."
You: "The results are then useless."

After much Grrr and Gnashing...

Management: "Identify the problems and we'll prioritize."
You: "Here are a dozen things."
Management: "These are too vague. Be more specific."
You: "Here are a score of things."
Management: "You're in the weeds. Bring it up a level to where business people can understand it."

After more Grrr and Gnashing...

Management: "What's the smallest change we can get away with?"
You: "The one I made that broke everything."
Management: "Let's have lots of peer review and design walkthroughs."
You: "Cool, then you'll see how broken it is."
Management: "Okay. Let's not. Instead, make the smallest change you can."
You: "I made one small change and it doesn't run this month."
Management: "Back out the change."
You: "The results are then useless."

Can we break the cycle of uselessness?

Depends.

We may be struggling with management folks who are set against fixing what's obviously broken. They're living a rich fantasy life that you can't really change.

However.

The Grrr and Gnashing part of the dialog represents time in which useful stuff can be done. Specifically. Rewrites.

Rewriting Strategy

There are three important parts of rewriting.
  • Understand what it's doing and why.
  • Describe it with test cases.
  • Make it objectively clear (i.e., high PyLint scores, complete documentation, etc.)
The effort often involves multiple passes. I like to describe it as Test-Driven Reverse Engineering (TDRE).

  1. Create (or expand) the test cases.
  2. Rewrite the code.
  3. Repeat until it's better.
It's essential to do these in order. Without test cases, rewrites are only more breakage. With test cases, rewrites are guaranteed to produce the same results as the previous mess of horrible code.

Sometimes the test cases are really a kind of "system test" where the whole application is run against some known inputs to produce some expected outputs. This is better than nothing. It supports building fine-grained unit test cases that conform to the system test case.
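
A minimal sketch of such a system-level characterization test, assuming a hypothetical legacy_app.py and a saved expected-output file:

import subprocess
from pathlib import Path

def test_legacy_app_known_input():
    result = subprocess.run(
        ["python", "legacy_app.py", "known_input.csv"],
        capture_output=True, text=True, check=True,
    )
    assert result.stdout == Path("expected_output.txt").read_text()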

Other times, the test cases may be proper unit tests and the rewrites can be at a finer level of granularity. In this case, test coverage may have to be expanded to include the fragile bits. In some cases, the rewrites may be necessary to make the code testable in the first place.

Adding test cases is objectively valuable work.

Even the dumbest of "It ain't broke" managers can recognize this value. The rewrites are a beneficial consequence of adding test cases. You may be able to achieve a goal of fixing something without ever being seen as "fixing" it. All you did was improve test case coverage and improve the "design for testability."

Costs and Benefits

Consider the cost of struggling vs. the cost of rewriting.

It's the same 80 hours of effort.

In one case, you struggled with something management insisted wasn't broken. Eventually, you found ways to make it work.

In the other case, you rewrite something management insisted wasn't broken. In the end, you actually understood it and created objective improvements in the code.

Which is better?