With Early Release ebooks, you get books in their earliest form—the author’s raw and unedited content as they write—so you can take advantage of these technologies long before the official release of these titles.
This will be the 2nd chapter of the final book. The GitHub repo is available at https://github.com/hjwp/book-example.
If you have comments about how we might improve the content and/or examples in this book, or if you notice missing material within this chapter, please reach out to the author at [email protected].
Let’s adapt our test, which currently checks for the default Django "it worked" page, and check instead for some of the things we want to see on the real front page of our site.
Time to reveal what kind of web app we’re building: a to-do lists site! I know, I know, every other web dev tutorial online is also a to-do lists app, or maybe a blog or a polls app. I’m very much following fashion.
The reason is that a to-do list is a really nice example. At its most basic it is very simple indeed—just a list of text strings—so it’s easy to get a "minimum viable" list app up and running. But it can be extended in all sorts of ways—different persistence models, adding deadlines, reminders, sharing with other users, and improving the client-side UI. There’s no reason to be limited to just "to-do" lists either; they could be any kind of lists. But the point is that it should allow me to demonstrate all of the main aspects of web programming, and how you apply TDD to them.
Tests that use Selenium let us drive a real web browser, so they really let us see how the application functions from the user’s point of view. That’s why they’re called functional tests.
This means that an FT can be a sort of specification for your application. It tends to track what you might call a User Story, and follows how the user might work with a particular feature and how the app should respond to them.
Functional Test == Acceptance Test == End-to-End Test
What I call functional tests, some people prefer to call acceptance tests, or end-to-end tests. The main point is that these kinds of tests look at how the whole application functions, from the outside. Other names you may hear are "black box test" or "closed box test", because the test doesn’t know anything about the internals of the system under test.
FTs should have a human-readable story that we can follow. We make it explicit using comments that accompany the test code. When creating a new FT, we can write the comments first, to capture the key points of the User Story. Being human-readable, you could even share them with nonprogrammers, as a way of discussing the requirements and features of your app.
TDD and agile or lean software development methodologies often go together, and one of the things we often talk about is the minimum viable app: what is the simplest thing we can build that is still useful? Let’s start by building that, so that we can test the water as quickly as possible.
A minimum viable to-do list really only needs to let the user enter some to-do items, and remember them for their next visit.
Open up functional_tests.py and write a story a bit like this one:
from selenium import webdriver
browser = webdriver.Firefox()
# Edith has heard about a cool new online to-do app.
# She goes to check out its homepage
browser.get("http://localhost:8000")
# She notices the page title and header mention to-do lists
assert "To-Do" in browser.title
# She is invited to enter a to-do item straight away
# She types "Buy peacock feathers" into a text box
# (Edith's hobby is tying fly-fishing lures)
# When she hits enter, the page updates, and now the page lists
# "1: Buy peacock feathers" as an item in a to-do list
# There is still a text box inviting her to add another item.
# She enters "Use peacock feathers to make a fly" (Edith is very methodical)
# The page updates again, and now shows both items on her list
# Satisfied, she goes back to sleep
browser.quit()
When I first started at PythonAnywhere, I used to virtuously pepper my code with nice descriptive comments. My colleagues said to me: "Harry, we have a word for comments. We call them lies." I was shocked! Hadn’t I learned in school that comments were good practice?
They were exaggerating for effect. There is definitely a place for comments that add context and intention. But my colleagues were pointing out that comments aren’t always as useful as you hope. For starters, it’s pointless to write a comment that just repeats what you’re doing with the code:
# increment wibble by 1
wibble += 1
Not only is it pointless, but there’s a danger that you’ll forget to update the comments when you update the code, and they end up being misleading—lies! The ideal is to strive to make your code so readable, to use such good variable names and function names, and to structure it so well that you no longer need any comments to explain what the code is doing. Just a few here and there to explain why.
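To make the contrast concrete, here is a small, self-contained sketch of the kind of comment that does earn its keep, because it records why rather than what. (The scenario about an "upload service" is invented purely for illustration; nothing here is from our project.)

import os
import time


def wait_for_file(path, timeout=10):
    # Polling is deliberate here: the (hypothetical) upload service writes
    # the file and then renames it atomically, so a simple existence check
    # in a loop is more reliable than watching for filesystem events.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(0.1)
    return False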
There are other places where comments are very useful. We’ll see that Django uses them a lot in the files it generates for us to use as a way of suggesting helpful bits of its API.
And, of course, we use comments to explain the User Story in our functional tests—by forcing us to make a coherent story out of the test, it makes sure we’re always testing from the point of view of the user.
There is more fun to be had in this area, things like Behaviour-Driven Development (see [appendix_bdd]) and building Domain-Specific Languages (DSLs) for testing, but they’re topics for other books. (If you want a taste, check out this video by the great Dave Farley: https://youtu.be/JDD5EEJgpHU?t=272.)
For more on comments, I recommend John Ousterhout’s A Philosophy of Software Design, which you can get a taste of by reading his lecture notes from the chapter on comments.
You’ll notice that, apart from writing the test out as comments,
I’ve updated the assert
to look for the word "To-Do" instead of Django’s
"Congratulations".
That means we expect the test to fail now. Let’s try running it.
First, start up the server:
$ python manage.py runserver
And then, in another terminal, run the tests:
$ python functional_tests.py
Traceback (most recent call last):
  File "...goat-book/functional_tests.py", line 10, in <module>
    assert "To-Do" in browser.title
AssertionError
That’s what we call an 'expected fail', which is actually good news—not quite as good as a test that passes, but at least it’s failing for the right reason; we can have some confidence we’ve written the test correctly.
There are a couple of little annoyances we should probably deal with. Firstly, the message "AssertionError" isn’t very helpful—it would be nice if the test told us what it actually found as the browser title. Also, it’s left a Firefox window hanging around the desktop, so it would be nice if that got cleared up for us automatically.
One option would be to use the optional second argument to the assert statement,
something like:
assert "To-Do" in browser.title, f"Browser title was {browser.title}"
And we could also use a try/finally
to clean up the old Firefox window.
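To make that concrete, here’s a rough sketch of what the try/finally version might look like (just to illustrate the idea; it’s not the version we’re going to keep):

from selenium import webdriver

browser = webdriver.Firefox()
try:
    browser.get("http://localhost:8000")

    # The second argument to assert gives us a more helpful failure message.
    assert "To-Do" in browser.title, f"Browser title was {browser.title}"
finally:
    # quit() runs whether the assertion passed or failed,
    # so we don't leave a Firefox window lying around.
    browser.quit()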
But these sorts of problems are quite common in testing,
and there are some ready-made solutions for us
in the standard library’s unittest
module.
Let’s use that! In functional_tests.py:
import unittest

from selenium import webdriver


class NewVisitorTest(unittest.TestCase):  # (1)
    def setUp(self):  # (3)
        self.browser = webdriver.Firefox()  # (4)

    def tearDown(self):  # (3)
        self.browser.quit()

    def test_can_start_a_todo_list(self):  # (2)
        # Edith has heard about a cool new online to-do app.
        # She goes to check out its homepage
        self.browser.get("http://localhost:8000")  # (4)

        # She notices the page title and header mention to-do lists
        self.assertIn("To-Do", self.browser.title)  # (5)

        # She is invited to enter a to-do item straight away
        self.fail("Finish the test!")  # (6)

        [...]

        # Satisfied, she goes back to sleep


if __name__ == "__main__":  # (7)
    unittest.main()  # (7)
You’ll probably notice a few things here:
(1) Tests are organised into classes, which inherit from unittest.TestCase.

(2) The main body of the test is in a method called test_can_start_a_todo_list. Any method whose name starts with test_ is a test method, and will be run by the test runner. You can have more than one test_ method per class. Nice descriptive names for our test methods are a good idea too.

(3) setUp and tearDown are special methods which get run before and after each test. I’m using them to start and stop our browser. They’re a bit like a try/finally, in that tearDown will run even if there’s an error during the test itself.[1] No more Firefox windows left lying around! (There’s a small standalone sketch of this behaviour just after this list.)

(4) browser, which was previously a global variable, becomes self.browser, an attribute of the test class. This lets us pass it between setUp, tearDown, and the test method itself.

(5) We use self.assertIn instead of just assert to make our test assertions. unittest provides lots of helper functions like this to make test assertions, like assertEqual, assertTrue, assertFalse, and so on. You can find more in the unittest documentation.

(6) self.fail just fails no matter what, producing the error message given. I’m using it as a reminder to finish the test.

(7) Finally, we have the if __name__ == "__main__" clause (if you’ve not seen it before, that’s how a Python script checks if it’s been executed from the command line, rather than just imported by another script). We call unittest.main(), which launches the unittest test runner, which will automatically find test classes and methods in the file and run them.
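If you want to convince yourself that tearDown really does run even when a test fails, here’s a tiny standalone sketch (not part of our project) that you could run on its own:

import unittest


class TearDownDemoTest(unittest.TestCase):
    def setUp(self):
        print("setUp runs before the test")

    def tearDown(self):
        # This still runs, even though the test method below fails.
        print("tearDown runs after the test, pass or fail")

    def test_always_fails(self):
        self.fail("a deliberate failure")


if __name__ == "__main__":
    unittest.main()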
Note
If you’ve read the Django testing documentation,
you might have seen something called LiveServerTestCase,
and are wondering whether we should use it now.
Full points to you for reading the friendly manual!
LiveServerTestCase is a bit too complicated for now,
but I promise I’ll use it in a later chapter.
Let’s try out our new and improved FT![2]
$ python functional_tests.py
F
======================================================================
FAIL: test_can_start_a_todo_list (__main__.NewVisitorTest.test_can_start_a_todo_list)
---------------------------------------------------------------------
Traceback (most recent call last):
  File "...goat-book/functional_tests.py", line 18, in test_can_start_a_todo_list
    self.assertIn("To-Do", self.browser.title)
    ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 'To-Do' not found in 'The install worked successfully! Congratulations!'

---------------------------------------------------------------------
Ran 1 test in 1.747s

FAILED (failures=1)
That’s a bit nicer, isn’t it?
It tidied up our Firefox window,
it gives us a nicely formatted report of how many tests were run and how many failed,
and the assertIn
has given us a helpful error message with useful debugging info.
Bonzer!
Note
If you see some error messages saying ResourceWarning
about "unclosed files", it’s safe to ignore those.
They seem to come and go every few Selenium releases.
They don’t affect the important things to look for in
our tracebacks and test results.
The Python world is increasingly turning from the standard library’s unittest module
towards a third-party tool called pytest.
I’m a big fan too!
The Django project has a bunch of helpful tools designed to work with unittest. Although it is possible to get them to work with pytest, it felt like one thing too many to include in this book.
Read Brian Okken’s Python Testing with pytest for an excellent, comprehensive guide to pytest instead.
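Purely for comparison (and with no promise about how well it plays with the Django helpers just mentioned), here’s a rough sketch of how our functional test might look in pytest’s fixture style:

import pytest
from selenium import webdriver


@pytest.fixture
def browser():
    # Everything before the yield plays the role of setUp,
    # everything after it plays the role of tearDown.
    driver = webdriver.Firefox()
    yield driver
    driver.quit()


def test_can_start_a_todo_list(browser):
    browser.get("http://localhost:8000")
    assert "To-Do" in browser.title, f"Browser title was {browser.title}"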
This is a good point to do a commit; it’s a nicely self-contained change.
We’ve expanded our functional test
to include comments that describe the task we’re setting ourselves,
our minimum viable to-do list.
We’ve also rewritten it to use the Python unittest
module
and its various testing helper functions.
Do a git status
—that
should assure you that the only file that has changed is 'functional_tests.py'.
Then do a git diff -w
,
which shows you the difference between the last commit and what’s currently on disk,
with the -w
saying "ignore whitespace changes".
That should tell you that 'functional_tests.py' has changed quite substantially:
$ git diff -w
diff --git a/functional_tests.py b/functional_tests.py
index d333591..b0f22dc 100644
--- a/functional_tests.py
+++ b/functional_tests.py
@@ -1,15 +1,24 @@
+import unittest
 from selenium import webdriver

-browser = webdriver.Firefox()

+class NewVisitorTest(unittest.TestCase):
+    def setUp(self):
+        self.browser = webdriver.Firefox()
+
+    def tearDown(self):
+        self.browser.quit()
+
+    def test_can_start_a_todo_list(self):
         # Edith has heard about a cool new online to-do app.
         # She goes to check out its homepage
-browser.get("http://localhost:8000")
+        self.browser.get("http://localhost:8000")

         # She notices the page title and header mention to-do lists
-assert "To-Do" in browser.title
+        self.assertIn("To-Do", self.browser.title)

         # She is invited to enter a to-do item straight away
+        self.fail("Finish the test!")

[...]
Now let’s do a:
$ git commit -a
The -a
means "automatically add any changes to tracked files"
(i.e., any files that we’ve committed before).
It won’t add any brand new files
(you have to explicitly git add
them yourself),
but often, as in this case, there aren’t any new files,
so it’s a useful shortcut.
When the editor pops up, add a descriptive commit message, like "First FT specced out in comments, and now uses unittest".
Now that our FT uses a real test framework, and that we’ve got placeholder comments for what we want it to do, we’re in an excellent position to start writing some real code for our lists app. Read on!
- User Story: A description of how the application will work from the point of view of the user. Used to structure a functional test.
- Expected failure: When a test fails in the way that we expected it to.