Tuesday, May 28, 2019

Rules for Debugging

Here's the situation.

Someone wrote code. It didn't do what they assumed it would do.

They come to me for help.

Here are my rules for debugging. All of them.

1. Try something else.



I don't have any other, more clever advice. When I look at someone's broken code, I'm going to suggest the only thing I know: try something else.

I can't magically make broken code work. Code's not like that. If it doesn't work, it's wrong, usually in a foundational way. Usually, the author made an assumption, followed through on that assumption, and was astonished it didn't work.

A consequence of this will be massive changes to the broken code. Foundational changes.

When you make an assumption, you make an "ass" out of "u" and "mption".

Debugging is a matter of finding and discarding assumptions. This can be hard. Some people think their assumptions are solid gold and write long angry blog posts about how a language or platform or library is "broken."

The rest of us try something different.

My personal technique is to cite evidence for everything I think I'm observing. Sometimes I actually write it down -- on paper -- next to the computer. (Sometimes I use the macOS Notes app.) Which lines of code ran. Which libraries were loaded. Sometimes I'll include references to documentation pages as comments in the code.

Evidence is required to destroy assumptions. Destroying assumptions is the essence of debugging.
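
As a concrete illustration -- a minimal sketch, in which the config file and its "timeout" key are hypothetical -- the idea is to replace a silent assumption with a logged observation and an explicit assertion:

    import json
    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    def load_timeout(path):
        """Read a timeout setting, citing evidence instead of assuming."""
        with open(path) as source:
            config = json.load(source)
        # Assumption under test: the file always has a "timeout" key.
        timeout = config.get("timeout")
        # Evidence: record what was actually observed, and where.
        logger.info("%s: timeout=%r", path, timeout)
        assert timeout is not None, f"no 'timeout' key in {path}"
        return float(timeout)

If the assertion fires, the log line right above it is the evidence: the assumption was wrong, and the next step is to try something else.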

Sources of Assumptions

I'm often amazed at how many people don't read the "But on Windows..." parts of the Python documentation. Somehow, they're willing to assume -- without evidence -- that Windows is POSIX-compliant and behaves like Linux. When the behavior doesn't match their assumptions, and they're on Windows, they've usually compounded a whole stack of personal assumptions. I don't have much patience at this point: the debugging is going to be hard.
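
The multiprocessing documentation's programming guidelines are the classic case: on Windows (which spawns new processes rather than forking them), the main module must be safely importable. A minimal sketch of the documented guard:

    import multiprocessing

    def work(n):
        return n * n

    if __name__ == '__main__':
        # On Windows, each worker process re-imports this module.
        # Without this guard, every import would try to start another pool.
        with multiprocessing.Pool(4) as pool:
            print(pool.map(work, range(10)))

Code that skips the guard can run fine on Linux and fail on Windows -- exactly the kind of "But on Windows..." note that gets skipped.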

I'm often amazed that someone can try to use multiprocessing.Pool.apply_async() without reading any of the example code. My guess is that assumptions trump research, making the assumptions solid gold, not subject to questioning or to evidence. In the case of multiprocessing, it's important to look at the code which appears broken and compare it, line by line, with example code that works.

Comparing broken code with relevant examples is -- in effect -- trying something else. The original didn't work. So... Go to something that does work and compare the two to identify the differences.
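
A minimal sketch of what such a comparison turns up; the work() function here is hypothetical. One very common difference: the broken version calls the function instead of passing it, and never collects the result, so any exception vanishes silently.

    import multiprocessing

    def work(n):
        return n * n

    if __name__ == '__main__':
        with multiprocessing.Pool(4) as pool:
            # Broken: work(2) runs immediately in the parent, and its
            # *result* is handed to apply_async() as if it were a function.
            # pool.apply_async(work(2))

            # Working, matching the documentation's examples: pass the
            # callable and its arguments separately, keep the AsyncResult,
            # and call .get(), which re-raises any worker exception.
            result = pool.apply_async(work, (2,))
            print(result.get(timeout=10))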

Tuesday, May 14, 2019

PyCon 2019

There are some things I could say.

But.

You can come to understand it for yourself, too.

Go here: https://www.youtube.com/channel/UCxs2IIVXaEHHA4BtTiWZ2mQ

Start with the keynotes.  https://www.youtube.com/channel/UCxs2IIVXaEHHA4BtTiWZ2mQ/search?query=keynote

For me, one of the top presentations was this one: https://www.youtube.com/watch?v=9G2s1TN9QQY
There are several closely related talks, but I found this one very helpful.

Tuesday, May 7, 2019

Fiction Writers and Query Letters

See http://flstevens.itmaybeahack.com/writing-world-building-and/ for some back-story on F. L. Stevens and the need to write a *lot* of query letters to agents for fiction. (The non-fiction industry is entirely different; there are acquisition editors who look for technical contributors.)

There's a tiny possibility of a Query Manager Tool (of some kind) appearing on a writer's desktop.

Inputs include:
  • Template letter.
  • A table of customizations per Agent. The intent is to allow more than simple name and pronoun changes. This includes Agent-specific content requirements. These vary, and can include the synopsis, first chapter, first 10 pages, first 50 pages. They're not email attachments; they have to be part of the main body of the email, so they're easy to prepare.
  • Another table of variant pitches to plug into the template. There are a lot of common variations on this theme. Sizes vary from as few as 50 words to almost 300 words. Summaries of published works seem to have a median size of 140 words. A writer may have several (or several dozen) variants to try out.
This can't become a spam engine (agents don't like the idea of an impersonal letter).

Also. A stateful list of agents, queries, and responses is important. Some agents don't respond to each query; they often offer a blanket apology along the lines of "if you haven't heard back in sixty days, consider it a rejection." So. You want to try again. Eventually. Without being rude. And if you requery, you have to send a different variant of the basic pitch.

Some agents give a crisp "nope," and you can update your list to avoid requerying them.
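
A minimal sketch of the "try again, eventually" bookkeeping, assuming a hypothetical CSV with queried, response, and status columns:

    import csv
    import datetime

    REQUERY_AFTER = datetime.timedelta(days=60)

    def requery_candidates(path):
        """Agents queried over sixty days ago: no answer, no crisp "nope"."""
        today = datetime.date.today()
        with open(path, newline="") as source:
            rows = list(csv.DictReader(source))
        return [
            row
            for row in rows
            if not row["response"]
            and row["status"] != "rejected"
            and today - datetime.date.fromisoformat(row["queried"]) > REQUERY_AFTER
        ]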

For new authors (like F. L. Stevens), there's a kind of manual query-tracking mess. It's not really horrible. But it is annoying. Keeping a database up to date with responses is about as much work as keeping a tracking spreadsheet, so there's little value in a lot of fancy Python software.

The csv, string.Template, email, and smtplib components of Python make this really easy to do. While I think it would be fun to write, creating this would be work avoidance.
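
For what it's worth, a minimal sketch of the mail-merge core, assuming hypothetical file names (query_letter.txt, agents.csv), hypothetical column names (email, title, plus whatever the template references), and a placeholder SMTP host:

    import csv
    import smtplib
    from email.message import EmailMessage
    from string import Template

    with open("query_letter.txt") as letter_file:
        letter = Template(letter_file.read())

    with open("agents.csv", newline="") as agents_file:
        agents = list(csv.DictReader(agents_file))

    with smtplib.SMTP("localhost") as mailer:
        for agent in agents:
            message = EmailMessage()
            message["From"] = "author@example.com"
            message["To"] = agent["email"]
            message["Subject"] = f"Query: {agent['title']}"
            # Agent-specific content (pitch variant, synopsis, pages)
            # comes from CSV columns, so it lands in the message body,
            # not as attachments.
            message.set_content(letter.substitute(**agent))
            mailer.send_message(message)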