Arrivals timeToStation

Hi all,

I’ve noticed that for repeated requests to {id}/Arrivals for the same id,
the timeToStation value can change slightly on each request for a given line, with a pattern like:

1st req: timeToStation 59
2nd req: timeToStation 62
3rd req: timeToStation 59
4th req: timeToStation 62
and so on, with a spacing of a few seconds between requests.

Could this pattern be caused by the load balancer redirecting requests to different servers, which then reply with slightly different results?

If so, would it be possible to use a cookie in the HTTP request to route follow-up requests from the same client to the same server (something like sticky sessions)?

I’ve also observed the same alternating pattern in the whole prediction set for a given line:
1st req: prediction for bus 321 is present
2nd req: prediction for bus 321 is absent
3rd req: prediction for bus 321 is present
4th req: prediction for bus 321 is absent

and so on.

Are others experiencing the same issue ?

Thank you

Welcome @nakkore

Could you cache the results on your own server? If you don’t need fresher information (and there doesn’t seem to be any), you could cache these results for, say, 60 seconds in something like memcached.

This would also address the second issue?
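To illustrate the caching idea, here is a minimal sketch of a 60-second TTL cache in Python. It uses a plain in-memory dict as a stand-in for memcached, and `fetch` stands for whatever hypothetical function your client uses to call the API; both names are assumptions, not part of the TfL API.

```python
import time

# Minimal in-memory TTL cache, standing in for memcached.
# `fetch` is a hypothetical callable that performs the real API request.
_cache = {}  # url -> (timestamp, payload)
TTL_SECONDS = 60

def cached_get(url, fetch, ttl=TTL_SECONDS, now=time.time):
    """Return the cached payload for `url` if it is still fresh,
    otherwise fetch a new copy and store it with the current time."""
    entry = _cache.get(url)
    t = now()
    if entry is not None and t - entry[0] < ttl:
        return entry[1]  # still fresh: serve the cached copy
    payload = fetch(url)
    _cache[url] = (t, payload)
    return payload
```

With a 60-second TTL, every client request inside the window sees the same cached response, which would also mask the alternating results from different backend servers.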

Thank you @briantist. Caching could be a workaround, but I was interested in understanding whether there’s a server problem, and the reason why this is happening. Recently it seems to happen more frequently than in the past, so I suspect something isn’t right and is worth investigating.
On second thought, caching may not even be a workaround: in the case of the second issue, you wouldn’t have the chance of seeing the “intermittent” bus for one long minute (or whatever TTL you decide).
I hope @jamesevans can comment on this issue, please.

@jamesevans any comments about the mentioned issue please ?

I can confirm that I see this.

I ran this query repeatedly:

Which returned this:

Running the same query again immediately afterwards returned this:

Note that the ‘read’ time predates the one in the previous query.

Repeating the query appeared to randomly return one or the other of the above results, causing the predicted arrival to flip between 67s and 105s.

Because the connection is encrypted I wasn’t able to verify whether the two sets of data were coming from different servers, but that’s possible.

If you want to handle it programmatically you could look at the ‘read’ time to see if a prediction is older than the most recent and, if so, ignore it.
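A minimal sketch of that approach in Python, assuming each prediction carries a `vehicleId` and an ISO-8601 `read` timestamp (field names taken from the posts above; adjust to whatever your responses actually contain):

```python
from datetime import datetime

# Track the newest 'read' timestamp seen per vehicle, and ignore any
# prediction whose 'read' time is older than that (a stale copy,
# possibly served by a lagging backend server).
latest_read = {}  # vehicleId -> datetime of newest 'read' seen so far

def accept_prediction(pred):
    """Return True if this prediction is at least as fresh as the
    newest one already seen for the same vehicle."""
    vid = pred["vehicleId"]
    # Handle a trailing 'Z' for older Python versions of fromisoformat.
    read = datetime.fromisoformat(pred["read"].replace("Z", "+00:00"))
    newest = latest_read.get(vid)
    if newest is not None and read < newest:
        return False  # older than what we already have: ignore it
    latest_read[vid] = read
    return True
```

Filtering the response through `accept_prediction` would stop the displayed arrival time flipping back to the older of the two values.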

Regarding your missing-bus issue: it’s possibly down to the same cause. However, I remember that when I tried to implement my own countdown viewer for the buses passing my house, I couldn’t rely on every bus appearing in every prediction message. Provided the timeToLive for the most recent prediction of any missing bus hadn’t expired, I would continue to use that prediction.
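The retention idea above could be sketched like this, assuming each prediction has a `vehicleId` and a `timeToLive` ISO-8601 timestamp (as in TfL arrival predictions); the merging logic itself is just an illustration, not how TfL does it:

```python
from datetime import datetime, timezone

# Keep the last-seen prediction for each vehicle; if a bus is missing
# from the latest response, reuse its previous prediction until its
# timeToLive timestamp has passed.
known = {}  # vehicleId -> last-seen prediction dict

def merge_predictions(latest, now):
    """Merge the latest API response into `known`, then drop only the
    entries whose timeToLive has expired. Returns the merged list."""
    for pred in latest:
        known[pred["vehicleId"]] = pred
    expired = [
        vid for vid, p in known.items()
        if datetime.fromisoformat(p["timeToLive"].replace("Z", "+00:00")) <= now
    ]
    for vid in expired:
        del known[vid]
    return list(known.values())
```

This way an “intermittent” bus keeps appearing in your display between the responses that happen to omit it.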

Hi, thank you for your feedback and suggestions. I’ll have a look at the read property, which I had completely ignored. It would be nice if someone from TfL shed some light on this weird behaviour, though.

Yes I also noticed this a few months back and thought it was a load balancing thing.

Unfortunately, no one seems to care enough to give an explanation of what’s going on…


That’s just the way it is… TfL have very limited resources, so it’s up to the community of developers to support ourselves.

Public APIs are provided on an “as-is basis” to a community of experts. You are expected to work these problems out for yourself.


Well, I understand the limited resources, but we’re all in this together. If it were all open source, the community could step in to fix issues, but unfortunately it’s not.
In this specific case we’re asking for an explanation, not a fix, and I don’t think an explanation requires such a great effort. An explanation would help us put the best fix in place, if one is possible.
As a side note, I still remember the blog post Why a new website but no app? Part 1 – Digital Blog, so I don’t get why, speaking of limited resources, TfL suddenly changed their mind.



On the first point, it is important not to conflate Open Source and Open Data. They are two similar-sounding but unrelated concepts.

On the second part, I suspect that TfL has a new boss! The previous Mayor had a more commerce-first mindset and the current one has a more (let’s say) union-backed one, so the priorities have changed, which in a good democracy they should.

I say this because I went to an open day and was personally warned off by a member of TfL staff. I was told that it’s no longer about computers; it’s about employing people.

I’m hoping that as they continue to develop their app they’ll start to realise all the issues that exist in the API and finally start to fix them…
