There are times when a "micro framework" is actually useful. I wasn't easily convinced that this could be true. Big framework or die trying. Right?
Maybe not so right.
My primary example of a micro framework's value is a quick demo site that shows how some APIs are going to be consumed.
I've been avoiding the Angular.js app itself. We're going to be supporting the user experience with a sophisticated Angular.js app, but as a back-end developer, I don't want to try to write demo pages in the proper app framework. There are too many ways to screw this up, because I'll miss some key UX feature. Instead, I want to write fake pages that show a considerably simplified version of consuming an API. Sort of "suggestion" pages to clarify how the APIs fit together.
To make it even more complex than necessary, I'm not interested in learning Angular.js, and I'm not interested in figuring out how it works. Running node.js, bower, npm, etc., etc., is too much.
[And Angular.js is just one version of the front-end. There's also mobile web, mobile native, tablet native, and God-alone-knows-what-all-else, to name a few. Each unique.]
Several sprints back, I slapped together two fake forms using Bootstrap for layout and hosted them with Bottle. The navigation and framework look were simply copied from a screenshot and provided as static graphics. Lame, but acceptable. All that matters is getting the proper logo to show up.
The problem is that the sequence of API requests has grown since then. The demo now needs a session so that alternative sequences will provide proper parameters to the APIs. We're beyond "Happy Path" interactions and into "what-if" demonstrations that show how to get (or avoid) a 404.
Bottle started with the significant advantage of fitting entirely into a single .py module. The barrier to entry was very low. But then the forms got awkwardly complex, and Jinja2 was required. Now that sessions are required, the single-module benefit isn't as enticing as it once was.
I've been forced to upgrade from Bottle to Flask. This exercise points out that I should have started with Flask in the first place. Few things are small enough for Bottle. In some ways, the two are vaguely similar; the @route() decorator is the biggest similarity. In other ways, of course, the two are wildly different. There's only a single Flask, but we can easily combine multiple Bottles into a larger, more comprehensive site. I like the composability of Bottles, and I wish Flask had it.
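The composability idea doesn't need either framework to illustrate: WSGI itself lets small apps be combined by path prefix. Here's a minimal stdlib-only sketch of the concept. This is not Bottle's actual mount() API; make_app() and dispatch() are names I've invented for the illustration.

```python
# A toy WSGI path-prefix dispatcher: several small apps combined
# into one site. Illustrative only -- Bottle's mount() does this
# (and much more) for real Bottle applications.

def make_app(body):
    """Build a trivial WSGI app that always returns the given text."""
    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body.encode("utf-8")]
    return app

def dispatch(mounts, default):
    """Route each request to the sub-app with the longest matching prefix."""
    def app(environ, start_response):
        path = environ.get("PATH_INFO", "/")
        for prefix in sorted(mounts, key=len, reverse=True):
            if path.startswith(prefix):
                return mounts[prefix](environ, start_response)
        return default(environ, start_response)
    return app

# Two demo sub-sites plus a default, combined into one site.
site = dispatch(
    {"/orders": make_app("orders demo"), "/search": make_app("search demo")},
    default=make_app("home"),
)
```

Each sub-app stays small and independently testable; the enclosing site is just a function over them.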
Flask Blueprints might be a good stand-in for composing multiple Bottles. Currently, though, each functional cluster of APIs has its own unique feature set. The bigger issue is updating the configuration to track the APIs through the testing and QA servers as they march toward completion. Since they don't move in lock-step, the configuration is complex and dynamic.
Flask's transparent access to session information is a wonderful thing. I built quick-and-dirty session management in Bottle, using shelve and a "simple" cookie. But it rapidly devolved into a lot of code to check for the cookie and persist the session. Each superficially simple Bottle @route() needed a bunch of additional functionality.
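The boilerplate looked roughly like this. This is a reconstruction, not the actual project code; get_session() and save_session() are names invented for the sketch. Every handler had to call both.

```python
import shelve
import uuid
from http.cookies import SimpleCookie

SESSION_SHELF = "demo_sessions"  # hypothetical shelf file name

def get_session(cookie_header):
    """Find (or create) the session dict for a request's Cookie header."""
    cookie = SimpleCookie(cookie_header or "")
    if "session_id" in cookie:
        session_id = cookie["session_id"].value
    else:
        session_id = uuid.uuid4().hex  # new visitor: mint an id
    with shelve.open(SESSION_SHELF) as db:
        return session_id, db.get(session_id, {})

def save_session(session_id, data):
    """Persist the session; return the Set-Cookie header value."""
    with shelve.open(SESSION_SHELF) as db:
        db[session_id] = data
    cookie = SimpleCookie()
    cookie["session_id"] = session_id
    return cookie["session_id"].OutputString()
```

Multiply this by every route, plus error handling for stale or forged cookies, and the appeal of Flask's built-in session object is obvious.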
The whole point was to quickly show how the APIs fit together, not to reinvent Yet Another Web Framework Based On Jinja2 and Werkzeug.
Django seems like too much for this job. We don't have a model, and that's exactly the point. Lacking a database model doesn't completely break Django, but it makes a large number of Django features moot. We just have some forms that get filled in for different kinds of events and transactions and searches and stuff. And we need a simple way to manage stateful sessions.
Omitted from consideration are the other dozen-or-so frameworks listed here: http://codecondo.com/14-minimal-web-frameworks-for-python/. This is a great resource for comparing and contrasting the various choices. Indeed, this was how I found Bottle to begin with.
Rants on the daily grind of building software. This has been moved to https://slott56.github.io. Fix your bookmarks.
Thursday, January 29, 2015
Tuesday, January 20, 2015
Webcast Wednesday
Be there: http://www.oreilly.com/pub/e/3255
Of course, I've got too many slides. 58 slides for a 60 minute presentation. That's really about 2 hours of material. Unless people have questions, then it's a half-day seminar.
Seriously.
I think I've gone waaaay too far on this. But it's my first one, and I'd hate to burn through all eight slides, take a few questions and be done too soon.
If this goes well, perhaps I'll see if I can come up with other 1-hour topics.
I worry a great deal about rehashing the obvious.
On the other hand, I'm working with a room full of newbies, and I think I could spend several hours on each of their questions.
And straightening out their confusions.
Case in point, not directly related to the webcast.
One of my colleagues had seen a webcast which described Python's &, |, and ~ operators, comparing them with and, or and not.
I'm not 100% sure, but... I think that this webcast -- I'm getting this second-hand; it's just hearsay -- showed that there's an important equivalence between and and &.
This is true, but hopelessly obscure. Since & has higher precedence than the comparison operators, there will be serious confusion when one fails to parenthesize properly.
Examples like this abound:
>>> 3 == 3 & 4 < 5
False
>>> (3 == 3) & (4 < 5)
True
Further, the fact that & can't short-circuit had become confusing to the colleague. I figured out some of what was going on when trying to field some seemingly irrelevant questions on "Why are some operators more efficient?" and "How do you know which to use?"
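The efficiency question does have a concrete answer: and short-circuits, while & always evaluates both operands. A small demonstration (the side() helper is just scaffolding for the example):

```python
calls = []

def side(value):
    """Record that we were evaluated, then return the value unchanged."""
    calls.append(value)
    return value

# 'and' stops at the first falsy operand: side(True) is never called.
result_and = side(False) and side(True)
assert calls == [False]

calls.clear()

# '&' is an ordinary binary operator: both operands are always evaluated.
result_amp = side(False) & side(True)
assert calls == [False, True]
```

If the right-hand operand is expensive, or only valid when the left-hand side is true, and is the operator that expresses that; & is not.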
Um. That's not really the point. There's no confusion if you set the bit-fiddling operators aside.
The point is that and, or, not, and the if-else conditional expression live in their own domain of boolean values. The fact that &, |, ^, and ~ will also operate on boolean values is a kind of weird duplication, not a useful feature. The arithmetic operators also work on booleans. Weirdly.
The Python rules are the rules: results depend on the operand types, so it makes sense for True & True to yield True, and it would be wrong, in that sense, for True & True to be 1. But it would fit the concept of these operators a little better if they always coerced bool to int, the way * and + do: True + True == 2.
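For contrast, here is the current behavior (standard Python, nothing hypothetical):

```python
# The bitwise operators preserve bool when both operands are bool...
assert (True & True) is True        # a bool, not the int 1
assert (True | False) is True
assert (True ^ True) is False

# ...while the arithmetic operators coerce bool to int.
assert True + True == 2
assert True * 3 == 3
assert not isinstance(True + True, bool)   # the sum is a plain int
```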
Why can't it be true for & and |? It would reduce potential confusion.
I'm sure the person who implemented __and__(), __or__(), __xor__(), and __invert__() was happy to create a parallel universe between and and &. I'm not sure I agree.
And perhaps I should have a webcast on Python logic. It seems like a rehash of fundamentals to me. But I have colleagues confused by fundamentals. So perhaps I'm way wrong about what's fundamental and what's useful information.
Thursday, January 15, 2015
Chapter 12 Alternate Example - Normalization and Decorators
For the forthcoming Functional Python Programming (https://www.packtpub.com/application-development/functional-python-programming), I was pressured by one of the technical reviewers to create a better example of composite function creation with decorators.
This was a difficult request. First, of course, "better" is poorly defined. More importantly, the example in the book is extensive and includes the edge-of-the-envelope "don't do this in real code" parts, too. It's important to be thorough. Finally, it's real-world data cleansing code. It's important to be pragmatic, but it's kind of boring. I really do beat it into submission, showing simple decorators, parameterized decorators, and crazy, obscurely bad decorators.
In this case, "better" might simply mean "less thorough."
But, perhaps "better" means "less focused on cleansing and more focused on something else."
On Decoration
The essence of the chapter -- and the extensive example -- is that we can use decorators as higher-order functions to build composite functions.
Here's an alternative example. This will combine z-score normalization with another reduction function. Let's say we're doing calculations that require us to normalize a set of data points before using them in some reduction.
Normalizing is the process of scaling a value by the mean and standard deviation of the collection. Chapter 4 covers this in some detail. Reductions like creating a sum are the subject of Chapter 6. I won't rehash the details of these topics in this blog post.
Here's another use of decorators to create a composite function.
from functools import wraps

def normalize(mean, stdev):
    normalize = lambda x: (x - mean) / stdev
    def concrete_decorator(function):
        @wraps(function)
        def wrapped(data_arg):
            z = map(normalize, data_arg)
            return function(z)
        return wrapped
    return concrete_decorator
The essential feature of the @normalize(mean, stdev) decorator is to apply the normalization to the vector of argument values to the original function. We can use it like this.
>>> d = [2, 4, 4, 4, 5, 5, 7, 9]
>>> from Chapter_4.ch04_ex4 import mean, stdev
>>> m_d, s_d = mean(d), stdev(d)
>>> @normalize(m_d, s_d)
... def norm_list(d):
...     return list(d)
>>> norm_list(d)
[-1.5, -0.5, -0.5, -0.5, 0.0, 0.0, 1.0, 2.0]
We've created a norm_list() function that applies a normalization to the given values. This function is a composite of normalization plus list().
Clearly, parts of this are deranged. We can't even define the norm_list() function until we have mean and standard deviation parameters for the samples. This doesn't seem appropriate.
Here's a slightly more interesting composite function. This combines normalization with sum().
>>> @normalize(m_d, s_d)
... def norm_sum(d):
...     return sum(d)
>>> norm_sum(d)
0.0
We've defined the normalized sum function and applied it to a vector of values. The normalization has parameters applied. Those parameters are relatively static compared with the parameters given to the composite function.
It's still a bit creepy because we can't define norm_sum() until we have the mean and standard deviation.
It's not clear to me that a more mathematical example is going to be better. Indeed, the limitation on decorators seems to be this:
- The original (decorated) function can have lots of parameters;
- The functions being composed by the decorator must either have no parameters, or have very static "configuration" parameters.
If we try to compose functions in a more general way -- all of the functions have parameters -- we're in for problems. That's why the data cleansing pipeline seems to be the ideal use for decorators.
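Here's a sketch of that ideal case, with hypothetical cleansing steps. Each step is a one-argument function whose "configuration" is fixed at decoration time; the decorated function's own argument passes through untouched. The cleanse() decorator and the sample steps are mine, not from the book.

```python
from functools import wraps

def cleanse(step):
    """Parameterized decorator: apply a one-argument cleansing step
    to every value before the decorated function sees the data."""
    def concrete_decorator(function):
        @wraps(function)
        def wrapped(data):
            return function(map(step, data))
        return wrapped
    return concrete_decorator

# Hypothetical pipeline. The outermost decorator's step is applied
# to the data first: strip whitespace, then fill empty fields.
@cleanse(str.strip)
@cleanse(lambda v: v if v else "N/A")
def cleaned(data):
    return list(data)
```

cleaned(["  a ", "   ", "b"]) yields ["a", "N/A", "b"]. Each @cleanse line composes cleanly precisely because it needs only its static step parameter.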
Thursday, January 8, 2015
The Python Challenge
See http://www.pythonchallenge.com
Addicting. For folks (like me) who like this kind of thing. For others, perhaps just dumb. Or infuriating.
Years ago -- many, many years ago -- I vaguely remember a similar game with a name like "insanity" or something like that. Now there's http://www.notpron.com and http://www.weffriddles.com. All of these are "show the page source" HTML games. These games are a kind of steganography: the page your browser renders isn't what you need to see.
What's important about the Python Challenge is that it's not specifically about Python. Any programming language would do. Although I suspect that folks who don't know Python will have a difficult time with some of the puzzles. I found that having Pillow was essential for problems 7 and 11. I'm sure there are packages as powerful as PIL/Pillow for other languages.
Also, one of the hints included dated Python 2.7 code. The rest of the problems, however, seem to fit perfectly well with Python 3.4.
I wasted a morning getting to challenge 11. It was a ton of fun.
Challenge 12 was the first of the show-stoppers. The hint "evil1.jpg" is beyond subtle. Let me add this hint: This is the first puzzle where the pictures have digits. Perhaps there are related pictures.
I spent hours studying and rearranging and filtering and enhancing evil1.jpg before I finally broke down and searched for a hint. The hint -- of course -- included the whole solution, so I had to skim the code to figure out what I'd missed.
Challenges 14, 15, and 16 also require additional hints. Challenge 14, for example, needs a reminder that the pixels need to be spiraled. Challenge 15 requires barely any programming and a lot of Google searching for famous people's birthdays. Challenge 16's hint is as opaque as the picture; it involves restructuring the image. But I had to resort to reading more of http://intelligentgeek.blogspot.com/2006/03/python-challenge-16-ahh-i-finally.html than I did for other problems.
I have chapters to review. I really shouldn't be playing around with silliness like this.
In spite of that, let me just say that reading about the "Look-and-Say" sequence was a bunch of fun. See http://oeis.org/A005150. Whatever you do, avoid reading this: http://archive.lib.msu.edu/crcmath/math/math/c/c671.htm; it won't help you with the Python Challenge at all. But it's interesting. And a huge time-waster. This particular challenge was more like a Project Euler problem. [Project Euler is back up and running, BTW.]
Here's my variation on the Conway sequence theme:
def say(digits):
    def run_lengths(digits):
        d_iter = iter(digits)
        c, d0 = 1, next(d_iter)
        for d in d_iter:
            if d0 == d:
                c += 1
            else:
                yield str(c) + d0
                c, d0 = 1, d
        yield str(c) + d0
    return "".join(run_lengths(digits))
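As a usage sketch, feeding say() its own output generates the Conway sequence. The function is restated here so the example stands alone:

```python
def say(digits):
    """One 'look-and-say' step: run-length encode the digit string."""
    def run_lengths(digits):
        d_iter = iter(digits)
        c, d0 = 1, next(d_iter)       # seed the buffer with the head
        for d in d_iter:
            if d0 == d:
                c += 1                # same digit: keep counting
            else:
                yield str(c) + d0     # flush the interim run
                c, d0 = 1, d
        yield str(c) + d0             # flush the final run
    return "".join(run_lengths(digits))

# Iterate from a seed of "1" to produce the Conway sequence.
sequence = ["1"]
for _ in range(4):
    sequence.append(say(sequence[-1]))
# sequence == ["1", "11", "21", "1211", "111221"]
```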
I'm a fan of generator functions. A big fan.
The interesting part is that we can do run-length encoding for the look-and-say function relatively simply using the "buffered generator" design pattern.
1. Seed the buffer with the head of the sequence, next(d_iter)
2. For each item in the tail of the sequence:
a. If it matches, count.
b. If it doesn't match, yield the interim reduction and reset the counter.
3. Yield the tail reduction.
This design pattern seems to occur in a number of contexts outside games and abstract math.
Thursday, January 1, 2015
eLearning eXtravaganza
Visit Packt Publishing today for the $5 eBook Bonanza.
What better way to celebrate the new year?
Read. Learn. Grow.
Find out more at http://www.packtpub.com/packt5dollar
#packt5dollar