data Archives

February 11, 2007

Feelin' the Flow (NYC Traffic Data, and more)

Who knew that there was such a wealth of statistical data published about traffic flows in NYC? You can't yet get the count for every corner, but if you're interested in people coming in and out of Manhattan, you're in luck.

The first place to look is the New York Metropolitan Transportation Council (NYMTC) Data and Model page. Most detailed is probably the Hub Bound Report, which goes into ridiculous detail about, unsurprisingly, people entering the Hub (Manhattan below 60th Street). On the average business day in 2003, how many people entered Manhattan on the N train between 9 and 10AM? 10,031 of course. On the L train? 21,336 (which, if you recall from this chart, is about a third more than a 7 lane freeway).

The NYC Department of Transportation Publications Page is also a good source -- New York City Bridge Traffic Volumes, Bicyclist Fatalities and Serious Injuries, and more.

Gee, all this would be really useful if you were, for example, trying to build a rough model of the effects of a London-like Congestion Pricing scheme for the NYC core...

November 6, 2008

You Know What I Did Last Summer?

I spent 10 weeks last summer as an intern on the strategy team of Transport for London's (TfL) London Rail division. This part of TfL is responsible for the London Overground, the Docklands Light Railway, and Tramlink, is the presumptive operator of Crossrail (if and when...), and serves as TfL's interface with the National Rail network. My general task was to help London Rail start to make use of the oceans of data spewing out of the Oyster smartcard ticketing system, but I spent the bulk of my time working on a project that came to be titled Oyster-Based Performance Metrics for the London Overground. I've posted my final report, along with the slides and outline for the presentation I gave to TfL executive management.

Rather than try to explain the work, I've just cut and pasted the executive summary from the report and included some of my favorite figures (with no explanation). It's not a terrible paraphrasing, but there is a lot of really good meat in the document if you are bored and hungry. Snooze on...

The London Overground is a pre-existing rail service in London whose operating responsibility and revenue risk were recently granted to Transport for London (TfL). Here we discuss the prospect of using data from the Oyster smartcard ticketing system to evaluate the performance of the London Overground explicitly from a passenger’s perspective.

The core idea behind our approach is to directly measure end-to-end individual journey times by taking the difference between entry and exit transactions stored by the Oyster system. The focus of this study is Excess Journey Time (EJT), calculated on a trip-by-trip basis as the difference between the observed journey time and some standard. In this case, the standard is determined for each trip with reference to published timetables, indicating how long the trip should have taken under right-time operations. A positive EJT indicates that the journey took longer than was expected.
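The trip-by-trip calculation described above is simple enough to sketch. Here is a minimal, hypothetical illustration (the function name and the idea of a per-origin-destination scheduled duration are my own stand-ins, not TfL's actual data model): observed journey time is the difference between the Oyster entry and exit taps, and EJT is that observation minus the timetable-based standard.

```python
from datetime import datetime

def excess_journey_time(entry_tap, exit_tap, scheduled_minutes):
    """Excess Journey Time (EJT) for one trip, in minutes.

    entry_tap/exit_tap are the Oyster gate timestamps; scheduled_minutes
    is the timetable-derived standard for that origin-destination pair
    under right-time operations. Positive EJT means the trip ran long.
    """
    observed = (exit_tap - entry_tap).total_seconds() / 60.0
    return observed - scheduled_minutes

entry = datetime(2007, 5, 14, 8, 2)   # tap in at 08:02
exit_ = datetime(2007, 5, 14, 8, 35)  # tap out at 08:35
print(excess_journey_time(entry, exit_, 26))  # 33 observed - 26 scheduled = 7.0
```

Aggregating this quantity over millions of trips is what produces the line- and network-level figures discussed below.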

Excess Journey Time is interpreted as the delay experienced by passengers as a result of services not running precisely to schedule. The distribution of EJT indicates reliability. We validate these interpretations using a detailed graphical analysis, and then aggregate them to the line and network level over a variety of time periods. Our analysis is conducted on large samples of Oyster data covering several months and millions of Overground trips in 2007.

At the aggregate level, relative values of Excess Journey Time are largely in line with expectations. The North London Line has the highest average Excess Journey Time of all lines on the London Overground, around 3 minutes, and the widest distributions (i.e. least passenger reliability). On all lines, there is significant day-to-day variability of Excess Journey Time. For the whole London Overground, and for the North London Line in particular, Excess Journey Time is worst in the AM and PM Peak timebands.

The current performance regime for the London Overground is the Public Performance Measure (PPM), which measures the fraction of scheduled vehicle trips arriving at their destinations fewer than five minutes late. Over time, EJT shows a strong correlation to PPM. There is clear additional variation in EJT, indicating that it captures certain information about passenger experiences that PPM does not. This variation tends to increase as PPM decreases, particularly in the AM and PM peak timebands, which suggests that the effectiveness of PPM as a measure of the passenger experience decreases as service deteriorates.

Another quantity of interest derivable from Oyster data is the time between passenger arrival at the station and the scheduled departure of the following train. The spread of the distribution of this quantity indicates the degree to which passengers arrive randomly (i.e. "turn up and go") rather than time their arrivals according to schedules. We have found that on the North London Line, especially during the AM, interpeak, and PM peak periods, passengers tend to arrive randomly. This is apparently in contrast to conventional wisdom for National Rail services, and has distinct implications for crowding levels and timetabling practice. In an appendix to this report we look at this in detail, and recommend that even headways be prioritized in timetabling the North London Line.

The Overground is, by design, part of a larger integrated multimodal network. Oyster data, by nature, is somewhat ambiguous in representing passenger trips on such a network that involve transfers or multiple routing options. This poses certain problems to our methodology, but also presents the opportunity to quantify and understand the experience of passengers across the entire network. We discuss these problems, potential solutions, and opportunities at length, as well as other applications for this methodology, and future research directions.

We have concluded that Oyster-based metrics are effective for monitoring and identifying problems as experienced by passengers on the London Overground. They may be even more effective for use across the whole of London's public transport network, particularly as Oyster is in the process of being rolled out to all National Rail services in the Greater London Area.

November 27, 2008

What's in a Schedule?

I owe somebody what amounts to this blog post. Pardon the lack of illustrative diagrams.

I have been thinking about mass transit trip planning software for the web and for mobile devices. Between the individual efforts of agencies around the world, and Google's efforts towards open sharing of structured transit system data, we seem to be on the right track, institutionally speaking. As a user, however, I am perpetually frustrated by the focus that every transit trip planner I have ever used puts on the supposed schedule, even for services that are high frequency and/or less-than-perfectly reliable.

This general feeling, combined with two recent and exciting meetings I have had, leaves me with a few nagging questions:

  • In providing transit users with such software, how useful is the schedule by which the transit provider has planned their operations?
  • When are expected waiting and travel times more useful than precise trip-by-trip itineraries?
  • What effect do randomness and unreliability have on those expectations?
  • Should the passenger plan her trip differently if she has to be on time than if her schedule is flexible?
  • Finally, does real-time information obviate the need for any or all of these other inputs?

The answer: it depends. The actual schedule (R trains leave Union St at 8:13, 8:25, 8:37 arriving at Union Square at 8:39, 8:51, 9:03, etc) is only relevant to the degree to which operations follow the plan. And even in the face of near-perfect operations, I only care about the schedule of departures when I have something to lose by ignoring it (i.e. when there's not always another train or bus in tolerably few minutes).

Expectations implied by the schedule (I should wait 6 minutes on average, but never longer than 12, and the ride is expected to take 26 minutes) are meaningful even when the precise schedule isn't, but only if those expectations are reasonable. For example, a simple model shows that as the service becomes even slightly variable, expectation of waiting time increases, as does the maximum. Of course, many things that cause some passengers to wait longer are experienced by other passengers as delays along the way.
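That simple model is worth making concrete. For a passenger arriving at random, the expected wait works out to E[H](1 + CV²)/2, where H is the headway and CV its coefficient of variation, so bunched service raises average waits even when the average headway is unchanged. A small simulation (entirely my own illustration) shows the effect:

```python
import random

def mean_wait(headways, trials=100_000, seed=1):
    """Average wait for passengers arriving uniformly at random
    over one cycle of the given headways (in minutes)."""
    random.seed(seed)
    cycle = sum(headways)
    waits = []
    for _ in range(trials):
        t = random.uniform(0, cycle)
        acc = 0.0
        for h in headways:
            if t < acc + h:            # arrival falls in this interval;
                waits.append(acc + h - t)  # wait until the interval ends
                break
            acc += h
    return sum(waits) / len(waits)

print(mean_wait([10, 10, 10]))  # regular 10-min service: ~5.0 min
print(mean_wait([4, 6, 20]))    # same 10-min average, bunched: ~7.5 min
```

Both scenarios have the same number of trains per hour; only the variability differs, and the bunched case makes the average random arriver wait roughly half again as long.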

Let's now think specifically about trip planning software for relatively high frequency urban transit services with normal amounts of variability. I don't want to be bothered with exact but fairly useless times of scheduled departures and arrivals. I just want to know how long I can realistically expect to have to wait, and how long the trip is likely to take. And when I have a hard timeline, like getting to a meeting or catching an airplane, I want to know the (approximately) worst case scenario.

Current levels of unreliability in our transit systems are not something we should have to live with. More funding, saner public policy, and better management can go a long way towards fixing some problems. I am not focusing here on the sources of unreliability, but suffice it to say they are many, some arguably the provider's responsibility (e.g. missing drivers, faulty equipment) and some arguably not (e.g. on-street traffic, passenger behavior). But given that they are here today, would you rather think a trip will be fast and have it end up being slow, or would you prefer to have the best information possible when making your own decisions?

The copious amounts of real service data collected by transit providers from bus GPS and rail signaling systems are of great value here. They allow us to fairly easily and cheaply describe distributions of waiting and travel times, and thus estimate expectations and approximate maximums for use in trip planning software.
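Turning those distributions into the "likely" and "worst case" numbers a trip planner should show could be as simple as picking percentiles from historical observations. A sketch, with made-up journey times and a naive percentile picker (real implementations would interpolate and segment by time of day):

```python
def trip_time_summary(observed_minutes):
    """Likely and near-worst-case trip time from historical observations.

    A planner could present the median as the realistic expectation and
    the 95th percentile as the approximate worst case for hard deadlines.
    """
    xs = sorted(observed_minutes)
    def pct(p):
        i = min(int(p * len(xs)), len(xs) - 1)
        return xs[i]
    return {"likely": pct(0.50), "worst_case": pct(0.95)}

samples = [24, 25, 26, 26, 27, 28, 30, 33, 41, 55]  # made-up journey times
print(trip_time_summary(samples))  # {'likely': 28, 'worst_case': 55}
```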

Often, those systems were in fact installed to provide real time data, with historical performance analysis a secondary or accidental purpose. The notion of an expected waiting time changes radically when real-time "next-vehicle" information is provided, assuming the real-time predictions are in fact accurate. However, even perfect real-time data doesn't prevent problems from occurring down the line or reduce the variability inherently introduced by successive transfers.

In the next generation of (open source?) web and mobile transit trip planning, please:

  • Give me the option to use the schedule or to use expected values, but try to be smart about the default.
  • When not using the schedule, please allow me to plan depending on how flexible my own schedule is.
  • Use real performance data to generate realistic expected and worst case scenarios.
  • When possible, especially when the trip is imminent, use real time data to reduce uncertainty in my trip plan, but make use of realistic expectations for forecasting the balance of the trip.

To implement such a trip planner, a number of open questions remain:

  • Even for a perfectly reliable system, where exactly is that threshold between using the schedule and using expectations?
  • How does this threshold change as a function of normal or excessive variability in operations?
  • What is the best way to integrate real-time data (of varying predictive quality) with realistic expectations for trip planning on-the-go?

If you're still awake, and have comments or questions, let's talk. The fact that this post found its way onto your computer makes it highly likely you already know how to get in touch.

About data

This page contains an archive of all entries posted to Frumination in the data category. They are listed from oldest to newest.


Creative Commons License
This weblog is licensed under a Creative Commons License.