A blog about mapping, Python, GIS, JavaScript, transit, open data, planning, &c.

## Microaccessibility with OpenTripPlanner

. . .

Analysis of accessibility is generally undertaken in large regions, such as metropolitan areas or entire countries. Frequently it also uses macro temporal scales, as in before-and-after analysis. This analysis instead looks at micro scales, both spatial and temporal. The study area is the University of California, Santa Barbara campus and the adjoining student community of Isla Vista.

I analyzed accessibility at every hour of a typical week, so that accessibility can be compared across times of day and across days of the week. Similar work has been done before for the Los Angeles area, examining accessibility at a few different times of day (see page 8 of that study). I used a finer temporal resolution (one hour rather than four large time periods) and also analyzed accessibility over the entire week, which makes weekly cycles discernible.

Only accessibility to eateries was analyzed. Network data were obtained from OpenStreetMap, and eatery locations from the UCSB Interactive Campus Map. Animations of accessibility over a typical week follow; in the darker blue areas, more eateries are accessible within five minutes' travel time. Five minutes was chosen as the cutoff because it is half the walking time between the intersection of Pardall and Embarcadero Del Norte and the front of the University Center, two areas where many eateries are concentrated. A more systematic study would need to estimate this cutoff from travel data. Accessibility was analyzed for both walking and cycling.
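The measure itself is a simple cumulative-opportunities count: for each origin and each hour of the week, count the eateries that are both open and reachable within the five-minute cutoff. Below is a rough sketch of that calculation in Python; the `travel_time_minutes` function is a stand-in for the routing backend (OpenTripPlanner in this project), and the data structures are hypothetical, not the ones actually used.

```python
import math
from dataclasses import dataclass

CUTOFF_MINUTES = 5          # half the Pardall-to-UCen walking time, as described above
WALK_SPEED_M_PER_MIN = 80   # assumed nominal walking speed; a real run would use OTP


@dataclass(frozen=True)
class Eatery:
    name: str
    location: tuple          # planar (x, y) coordinates in metres, for illustration
    open_hours: frozenset    # hours of the week (0-167) when the eatery is open


def travel_time_minutes(origin, destination, hour_of_week, mode="WALK"):
    """Stand-in for the routing engine (OpenTripPlanner in this project):
    here, just straight-line distance at a nominal walking speed."""
    dx = destination[0] - origin[0]
    dy = destination[1] - origin[1]
    return math.hypot(dx, dy) / WALK_SPEED_M_PER_MIN


def accessibility(origin, eateries, hour_of_week, mode="WALK"):
    """Cumulative opportunities: eateries open at this hour and reachable
    from the origin within the travel-time cutoff."""
    return sum(
        1
        for eatery in eateries
        if hour_of_week in eatery.open_hours
        and travel_time_minutes(origin, eatery.location, hour_of_week, mode) <= CUTOFF_MINUTES
    )


def weekly_surface(origins, eateries, mode="WALK"):
    """One accessibility value per origin per hour of the week (168 hours);
    values like these drive the animated maps."""
    return {(origin, hour): accessibility(origin, eateries, hour, mode)
            for origin in origins
            for hour in range(168)}
```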

The two animated maps show accessibility to eateries at different times of day by the two modes. The bicycle map shows much more accessibility because with a bicycle one can reach many more opportunities in five minutes' time. A daily cycle is easy to discern, with most (but not all) businesses closing in the late evening and opening again in the morning, creating a pulsing pattern of accessibility. The eateries on campus (the eastern portion of the maps) do not have the same span of service as the eateries in Isla Vista. On weekends, most of the campus eateries are not open at all.

There are a few limitations. OpenTripPlanner’s cycling mode currently does not support bicycle parking; at UCSB, there are many bicycle parking areas where one must park before walking to one’s building. At a micro scale of analysis, correctness of the network is also very important, because small absolute errors can be large relative to the total length of a trip; the OpenStreetMap data were improved for this project but are still not perfect, especially given construction on campus.

Further research would use behavioral data to better estimate parameters for the accessibility measure, as well as to interpret the results. Sara Matthews analyzed mode choice in trips to Humboldt State University in the context of residential location; accessibility could be used as an independent variable in a similar analysis of mode choice.

Even in the context of comprehensive transportation models such as SimAGENT (Southern California Association of Governments) and SF-CHAMP (San Francisco County Transportation Authority), accessibility measures rendered as maps like these are valuable. They are understandable and thus can easily be presented to non-technical decision-makers and to the public. They also generally play a descriptive rather than a projective role; that is, they describe current conditions rather than predicting future ones. Finally, they can play a role in individual decision support; Jarrett Walker has noted the usefulness of isochrones for decision support, and these accessibility measures can serve the same purpose. Walk Score® has recently announced similarly understandable accessibility maps, which makes these types of measures much more widely available.

For a more in-depth treatment, see the full report.

Special thanks to Dr. Konstadinos Goulias and Jae Lee in the GeoTrans lab at UCSB, and to Bryan Karaffa in the UCSB Department of Geography. Map data © OpenStreetMap contributors. Eatery data © UCSB Interactive Campus Map. These maps and analyses are the result of a research project and should not be used for decision support without additional consultation.

## Jane Jacobs and Global Cities Presentation at the California Geographical Society

. . .

I gave a presentation on the connections between Jane Jacobs and Global Cities Theory at the California Geographical Society 2013 conference. The slides from the presentation are on the Publications page.

## Organizing my Research

. . .

As a follow-up to my recent post about organizing my library, this post talks about the system I’ve come up with for organizing my research.

I was starting a new research project, and I realized that writing my bibliography and managing my citations manually wasn’t going to be good enough. I needed a reference manager of some sort. My librarian suggested I try Mendeley, and it has become the core of my reference-management workflow.

Whenever I read a new academic work, I put it into Mendeley first and then use the notes field in Mendeley to keep notes on the work. I use the Mendeley Desktop client for almost all my interactions with Mendeley; it’s available for Linux, Windows, and Mac OS X, so it should work for most users. I haven’t used the PDF annotation feature much, but when I have, I’ve found it pretty cool.

I split my references up into folders for each project, to better organize them.

I write my papers using LaTeX and manage bibliographies with BibTeX-format files. In the Mendeley Options dialog, I have enabled automatic BibTeX syncing, creating one BibTeX file per collection. I save these files to ~/texmf/bibtex/bib, which is a global location for BibTeX files. I can then say \bibliography{collectionName} in any LaTeX file on my system and have it automatically import the citations from that Mendeley collection. Then I can use \autocite, \printbibliography and any other commands one would usually use to manage citations in LaTeX. One caveat is that your collection names cannot contain spaces; BibTeX doesn’t support that.
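To make the workflow concrete, here is a minimal sketch of what a document using one of these synced files might look like. It assumes biblatex with the biber backend (which is what \autocite and \printbibliography suggest); the collection name transitResearch and the citation key someKey2013 are made up for illustration.

```latex
\documentclass{article}

% Assumption: biblatex with biber; any citation style would work similarly.
\usepackage[backend=biber, style=authoryear]{biblatex}

% "transitResearch" is a hypothetical Mendeley collection synced to
% ~/texmf/bibtex/bib/transitResearch.bib, as described above.
\bibliography{transitResearch}

\begin{document}
Accessibility is a widely studied measure \autocite{someKey2013}.

\printbibliography
\end{document}
```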

Beyond that, when Mendeley syncs the BibTeX files, it also syncs the notes I’ve put in the notes field of each entry. This is really cool. I can then use a LaTeX file like this to generate a PDF annotated bibliography (in MLA format) of a particular collection.

For web pages, I have Zotero installed inside Firefox, and I import pages into that. I have Mendeley configured to automatically import citations from Zotero.

Finally, I’ve put Scholarley on my phone and set it to sync my Mendeley library so that I can look at my citations on the go. Unfortunately, I can’t add works or take notes through that client (so no research on the bus), but I hope that functionality will arrive soon.

## Organizing My Library

. . .

My personal library of books has been growing rather quickly of late, and a few months ago I realized I needed to organize it. I decided to organize my books using the Library of Congress classification system; it is the system used by most academic libraries in the United States, and it was familiar to me. I could have used the Dewey Decimal System as well, but Library of Congress does have the advantage of covering both fiction and non-fiction. I initially learned how to do this from this post, which goes into more depth about the physical organization of items.

I also thought that an electronic catalog, just like the public libraries have, was in order. After looking around at different pieces of software, I settled on a free private account from LibraryThing. One can organize up to 200 books for free, and the paid plans are very reasonable. Be aware that the default account setup lists your books on your public profile; if you want a private account, you must go to Edit Profile –> Account Settings and change your account to private.

I first put all of my books into the catalog using the straightforward ‘Add books’ tool:

As you can see, I typed in an ISBN (10-digit, in this case) and performed a search. LibraryThing found the details of the book in the Library of Congress catalog (one can also use many other catalogs if the item one wants is not in the LOC catalog). Clicking on the result causes the item to be added to your library. You can specify the collections before you start.

I don’t have a barcode scanner, but I am told that one can use a barcode scanner to input ISBNs more quickly.

ISBNs can be found on the back cover of most modern books. On older books, the ISBN can often be found on the copyright page:

Even older books may not have ISBNs at all. However, many books have a Library of Congress Catalog Number printed on the copyright page. Entering this number (with the hyphen) into the search box and searching the Library of Congress should work.

I did have a few books that had neither Library of Congress Catalog Numbers nor ISBNs. For these books, I tried a search by author or by title. If I didn’t find it in the Library of Congress, I would set LibraryThing to search Amazon instead. There were a few cases where I knew of a library that held the item, so I clicked on the ‘All 700 available sources’ link and chose that particular library. One could also perform a search in WorldCat and then set LibraryThing to search in a library that owns the item. As a last resort, one can enter a book manually, although I never did that.

I also separated books into two LibraryThing “collections” for the two physical locations where I keep books. When one is adding books, one can select what collection to add them to.

Once I had put all of my books into LibraryThing, I went back through the stack of books I had just entered. I set the LibraryThing search results page to display the call number for the books by clicking on the gear next to the search style selector (the A B C D E buttons at the top of the search results page). For each book, I found its record in my LibraryThing catalog. I then wrote the call number on a removable sticky note and affixed it to the spine. For the more fragile books, I used the tops of post-its cut to an appropriate size. For the most fragile books, I did not apply a call number at all; they can be found based on the books around them.

Most of the books already had call numbers in the catalog, thanks to the import from the Library of Congress catalog. However, some did not, and their call numbers had to be tracked down.

Many books have a Library of Congress cataloging-in-publication block on the copyright page which lists the call number (although I would suspect most of those books to be in the Library of Congress catalog). This is the first place to check:

I performed a search in WorldCat for the title and then followed the links to the holding page for the item in an academic library that uses LOC classification. If I couldn’t find exactly my version, I would use the call number anyhow but note in LibraryThing that it was non-authoritative. Failing that, I would just drill down through the Wikipedia page on the Library of Congress system, find what seemed the most appropriate number, and use that, marking it as non-authoritative. If there was a range specified, I would just use the beginning of the range. In at least one case, all I could come up with was the section letter, so I will file that book at the start of that section. While this might not be advisable in a library, it will work for me; the point is that I can find the books later on.

The last step is to shelve all the books in order by their call numbers.

Now I can search my catalog and see exactly where to find each book:

## A Simple Model of Automobile Travel Time

. . .

For some personal research I’m working on, I’m using OpenTripPlanner for automobile routing. I’ve already applied speed limits to the routing algorithm, but that’s only part of accurately modeling automobile travel time. While for a cyclist or a pedestrian, the amount of time spent actually moving at full speed may be the lion’s share of their journey, for an automobile this is not true. Especially in city traffic, it’s likely that a large part of the time is spent waiting at intersections, accelerating, or moving through congestion (or not moving through congestion).

Also, automobiles have large turn costs in some cases; turning left at a busy intersection (in a country where driving occurs on the right) may have a large cost associated with it. In contrast, a pedestrian can choose the side of the street he or she wishes to walk on. Even a cyclist has some flexibility at a large intersection; he or she can dismount and choose to cross as a pedestrian in two straight lines rather than making a left turn.

Finally, the widely variable environments in which automobiles can operate create different problems in routing. The biggest benefit of freeways is that they have no intersections; in many cases, the difference in speed limits between a 65 mph freeway and a 45 mph arterial street is insignificant because the freeway route is longer.

I propose this as a general, very simple model for travel time, appropriate in a setting where the only data we have are the physical attributes of the streets and intersections (from OpenStreetMap). However, it is also flexible enough to take advantage of more data where they are available.

$$t_{total} = t_{distance} + t_{intersection} + \Delta t_{acceleration} + \Delta t_{deceleration} + \Delta t_{traffic}$$

$$t_{distance}$$ is just the lowest possible travel time based on the roads being traveled upon; for instance, if we travel 25 miles on a road with a speed limit of 25 mph, it will take at least an hour, without counting any other source of delays (assuming we obey the speed limit, which a trip planner must do). This is the simplest variable to calculate. Though it would probably not be a part of the initial model in OTP, this would be the place to figure in characteristics of the road itself. For example, many winding mountain roads (at least in California) have a posted maximum speed of 45 mph or 55 mph, but in actual practice there are many places where a motorist must slow significantly below this figure for safe driving.
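One way to write this term explicitly (the notation here is mine, not from OTP): for a route made up of edges $$e$$, each with length $$\ell_e$$ and maximum permitted speed $$v_e$$,

$$t_{distance} = \sum_{e \in \mathrm{route}} \frac{\ell_e}{v_e}$$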

$$t_{intersection}$$ is the time spent stopped at intersections, waiting for the light to change or waiting for traffic to clear. It can be estimated from the intersection's properties (whether there is a traffic light present, &c.) and probabilities; for instance, if there is a 35% probability that a motorist will be stopped by a traffic light on an arterial road, and a stop averages 30 seconds when it happens, then $$t_{intersection} = (0.35)(30) = 10.5$$ seconds at that intersection.
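Summed over all of the intersections along a route, and again using notation of my own (a stop probability $$p_i$$ and an average stopped delay $$\bar{d}_i$$ at intersection $$i$$), this is

$$t_{intersection} = \sum_{i \in \mathrm{route}} p_i \, \bar{d}_i$$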

$$\Delta t_{acceleration}$$ and $$\Delta t_{deceleration}$$ are the additional times that come from decelerating into and accelerating out of intersections. For instance, if a car must decelerate to 0 mph at an intersection and the road leading into and out of the intersection has a speed limit of 45 mph, $$\Delta t_{deceleration}$$ would be the time it takes to decelerate from 45 mph to 0 mph minus the time it takes to travel an equivalent distance at full speed, to avoid double-counting.
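As an illustration (this assumes a constant deceleration rate $$a$$, which is not part of the model above and would have to be chosen or estimated from data): a car slowing from full speed $$v$$ to a stop takes $$v/a$$ seconds and covers a distance of $$v^2/(2a)$$, which at full speed would have taken only $$v/(2a)$$ seconds, so

$$\Delta t_{deceleration} = \frac{v}{a} - \frac{v^{2}/(2a)}{v} = \frac{v}{2a}$$

The same reasoning gives $$\Delta t_{acceleration} = v/(2a)$$ under a constant acceleration rate.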

Finally, $$\Delta t_{traffic}$$ is delay from traffic. Calculation of this is highly data-dependent and could be implemented in OTP at a later date.
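Putting the terms together outside of OTP, a back-of-the-envelope implementation might look like the Python sketch below. The acceleration and deceleration rates, the helper names, and the example trip are all assumptions for illustration; this is not OTP code.

```python
# A rough, illustrative implementation of the model above; none of this is
# OpenTripPlanner code, and all of the parameters are assumed values.

ACCEL_RATE = 2.6   # m/s^2, assumed constant acceleration
DECEL_RATE = 3.0   # m/s^2, assumed constant deceleration


def mph_to_mps(mph):
    return mph * 0.44704


def speed_change_delay(speed_mps, rate):
    """Extra time spent changing between a stop and full speed, compared with
    covering the same distance at full speed: v/a - v/(2a) = v/(2a)."""
    return speed_mps / (2 * rate)


def travel_time(edges, intersections):
    """edges: list of (length_m, speed_limit_mph) segments along the route;
    intersections: list of (stop_probability, mean_stop_delay_s) between
    consecutive edges."""
    t_distance = sum(length / mph_to_mps(limit) for length, limit in edges)

    t_intersection = sum(p * delay for p, delay in intersections)

    # Expected acceleration/deceleration penalty at each intersection,
    # weighted by the probability of actually having to stop there.
    t_speed_changes = 0.0
    for (p, _), (_, limit_in), (_, limit_out) in zip(intersections, edges, edges[1:]):
        t_speed_changes += p * (speed_change_delay(mph_to_mps(limit_in), DECEL_RATE) +
                                speed_change_delay(mph_to_mps(limit_out), ACCEL_RATE))

    t_traffic = 0.0  # would require congestion data not available here

    return t_distance + t_intersection + t_speed_changes + t_traffic


# Example: two 400 m blocks at 25 mph with one signalized intersection between
# them (35% chance of a 30-second stop), giving roughly 83 seconds in total.
print(travel_time(edges=[(400, 25), (400, 25)],
                  intersections=[(0.35, 30)]))
```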

Now that I’ve laid out a generic model, I’ll mention how it could be implemented in OTP. In OTP, $$t_{distance}$$ would be calculated by the edges in the graph as they are traversed, based on their maximum speed. All of the other variables would be calculated at the intersection vertices, based on three things: the speeds of the incoming and outgoing segments, used to calculate the acceleration and deceleration terms (though the vertices do the calculation, the calculations are requested by the edges after the vertices, once the full traversal pattern is known); the speed at which the intersection itself can be traversed, which is estimated from the incoming and outgoing speeds and the turn being made, and which may be 0 in the case of an expected stop; and the amount of delay present, based on whether there is a traffic light and what kind of road the intersection is on. For instance, the junctions between freeways and their ramps have zero delay because they aren’t really “intersections” in the usual sense of the word.