Bus working timetables - Groundhog Day all over again

Depressingly, the issue that was dogging this process a few months ago has reared its ugly head again in the 23rd December upload.

For the following routes, some or all of the current files have been overwritten with obsolete material:

3 11 18 (sSa) 52 (though most of the latest day schedules had not been loaded anyway) 73 (sSa) 140 148 186 296 (sSa) 343 (sMT) 345 533 697 698 H10 N3 N8 (sFrNt sSaNt) N11 N140 N381 UL7 UL8 UL21

The 140, for example, has started running to Heathrow again.

I think that current files for 52 (most of day service), H9 and W11 are yet to be loaded as well.

Good news, good news. All the above “overwritings” have been corrected.

Bad news, bad news. It’s been done for routes 92 266 406 N266 and W9 instead. The 266 is back to Hammersmith.

52, 330 (part) and H9 have never had the current WTTs loaded.

I am aware that my ramblings on WTTs and spider maps are not exactly to do with reuse of open data but the causes are presumably either technical or to do with incorrect use of the technology. Direct contacts might help?

92 is OK again but the 391 is now on the naughty step. Clearly someone has got the hump with the recent West London changes and decided that the 266 really ought still to run to Hammersmith and the 391 to Sands End.

The schedules have disappeared completely this morning. Not in the Data Bucket listing and using the normal site just gets a response that no files have been found.

Server issue?

Still no files available. The initial page for the system is there; just entering any route number gets the response that no schedules have been found. Certainly no indication that the absence of files is deliberate.


@mjcarchive

I can get a response for the 308

Brian - Thanks. I just did the same search and got nothing

so if they were there several hours ago, some tidy person appears to have arranged for their deletion again.

Hi @mjcarchive @briantist

We are working on restoring the schedules & fixing the root cause of the issue, which we have now identified.

We uploaded the schedules manually so we could diagnose the issue, and found that the sync task permissions were not correct.

We have implemented a fix and are arranging a new upload with the data owners to ensure that the fix that we have performed is stable.

We’ll let you know once that is complete.

Thanks,
James

Thanks, James. Is that a fix for the schedules disappearing after being loaded and for the issue of old files overwriting newer, or just one of them?

I was not referring to you as “tidy person” BTW, despite your Bluebirds affiliation!

haha - I’m not actually working on this one myself, but my colleague is tidy too. I also read that as Bluebirds affliction, which would also be accurate…

The data owners specify the files that are for public use and as long as their upload is correct we should process them into the S3 bucket which powers that web page.

There was an issue that they had before where they were overwriting the production files with test data.

Once we have fully fixed the sync to the S3 bucket, that should hopefully be the end of it!

Thanks,
James

James

This evening there are two separate sets of WTTs in the Data Bucket, though the second set of links (with “pub” included in the path) do not work.

So, taking the set that does work, mostly carrying the date of 13th January, how does it look?

Not tidy. Not lush. I am afraid the only word I can use is “grim”. Outdated schedules have crept in for an alarmingly large number of routes. List (1) below is where all or most of the current WTTs have been overwritten. List (2) shows those where some (but not most) current WTTs have been overwritten. List (3) is similar but the wrong version for the right SCN is now present.

I am struggling to see how this can come about, and quite how overwriting with test data can be the cause, but if that is what you are being told I suppose it must be right.

I can identify dodgy files very quickly after downloading with relatively unsophisticated techniques. The only slightly clever thing I have to do is access the title property (which includes the Service Change Number) of each file and check that the title does not match anything that had previously been loaded. I can see that a check of that nature is not much good if the publication process is not faithfully picking up a new and correctly created set of files.
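For what it is worth, once the titles have been pulled out of the PDFs (e.g. with pypdf's `PdfReader(path).metadata.title`, though any metadata reader would do), the check itself is just a set comparison. A minimal sketch, with invented route names and SCN values:

```python
# Sketch of the title check described above. Assumptions: the title
# strings (which embed the Service Change Number) have already been
# read from each PDF's metadata; the sample values are made up.

def find_resuscitated(this_weeks_titles, titles_seen_before):
    """Return titles in the new upload that match something loaded
    in an earlier week, i.e. obsolete files that appear to have
    overwritten current ones."""
    return sorted(set(this_weeks_titles) & titles_seen_before)

seen_before = {"Route 140 MF SCN/41234", "Route 92 Sa SCN/40001"}
new_upload = ["Route 140 MF SCN/41234", "Route 165 MF SCN/43210"]

print(find_resuscitated(new_upload, seen_before))
# → ['Route 140 MF SCN/41234']
```

As noted, a check of this shape only catches resurrections; it cannot tell whether the publication process failed to pick up a genuinely new set of files in the first place.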

Oh, and there has been no improvement for the routes for which WTTs are missing altogether.

(1) All (or at any rate most) schedules incorrectly overwritten.

You will note that some of these are serial offenders.

17
25U
92
96
105
140
186
216
224
225
238U
280
308
394
412
440
453
533
697
698
E6
H10
H32
H98
K1
W16
N8
N109
N307
N453
RB1
RB2
RB4
RB5
RB6
RB6A
UL7
UL8
UL16
UL79
UL80

(2) One or two schedules incorrectly overwritten

3 (Ce)
12 (sMT)
14 (sMT)
47 (sFr sMT)
183 (sTh)
192 (MF)
229 (MFSc)
281 (sMF)
343 (sMT)
349 (Fr MT)
N12 (sMTNt)
N14 (sSuNt)
N18 (sMTNt)
N36 (sMTNt sMFNt sSuNt)
N37 (sMTNt)
N148 (sSaNt)
N155 (sSaNt)
N281 (sSaNt)
N285 (sSaNt)
N343 (sMTNt)

(3) Wrong version has overwritten later version (for same SCN)

8 (sSa)
15 (sSa)
18 (sSa)
88 (sSa)
113 (Ce)
345 (Ce)
N14 (sMTNt)
N15 (sMTNt)
N18 (sSaNt)
N26 (sSaNt)
N35 (sMTNt)
N44 (sMTNt)
N47 (sFrNt sMTNt)
N57 (sMTNt)
N72 (sSaNt)
N105 (sSaNt)
N213 (sMTNt)
N296 (sFrNt)
N365 (sSuNt)

Michael


Also, the small number of genuinely new files loaded actually date back to late 2019. Nothing from last weekend’s changes has been loaded.

And half the new files loaded (for N52 345 N345) are older versions replacing newer ones (with the same SCN).

Losing the will to live here.

Now found that last night’s update had no (or almost no) files for routes 8 109 218 228 278 306 418 C1 W4 and X140.

I think most, if not all, of these had service revisions around 7th December, and quite a few of the errors noted last night seemed to involve overwriting timetables from around that time.

I see there was another upload this morning. C1 and W4 seem to be back. I have neither the time nor inclination to analyse this further set until there is some indication that I won’t just be wasting more time by doing so.

Michael


Hi @mjcarchive

We are still working on the issue and we’ll let you know when we believe it is resolved.

We’ll use the examples above to test our resolution once implemented.

Thanks,
James


James

I have looked particularly carefully at this week’s upload. It is a lot better than last week’s but there are still plenty of errors. I have summarised these in an Excel file at

http://www.timetablegraveyard.co.uk/Anomaly_summary.xlsx

I hope it is reasonably self-explanatory and of some use.

Looking back over a longer period, it seems that some routes see this problem frequently and others not at all. What seems to happen too often is that a new set of files is loaded then a few weeks (or even one week) later the old files pop up again, then this is corrected, then they reappear. I think I found one example where three separate sets of files were taking turns. The only explanation I could think of was that there was no means of ensuring that there was only one file marked current for each route/daytype combination. Without that, it becomes almost inevitable that you get doubles or worse (or sometimes nothing when there should be something?).
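A guard of that sort would only take a few lines on the publication side. A hypothetical sketch (the route, daytype and SCN values are invented, and I am only guessing at how the files are keyed) of checking that no route/daytype pair has more than one file marked current:

```python
from collections import defaultdict

def find_doubled_up(current_files):
    """current_files: iterable of (route, daytype, title) tuples for
    everything marked as live. Returns the route/daytype pairs that
    have more than one 'current' file -- the doubling-up suggested
    above."""
    by_key = defaultdict(list)
    for route, daytype, title in current_files:
        by_key[(route, daytype)].append(title)
    return {key: titles for key, titles in by_key.items() if len(titles) > 1}

files = [
    ("92", "MF", "SCN/41234"),
    ("92", "MF", "SCN/39000"),   # an older file still marked current
    ("165", "Sa", "SCN/43210"),
]
print(find_doubled_up(files))
# → {('92', 'MF'): ['SCN/41234', 'SCN/39000']}
```

Anything this check flagged could then be resolved by keeping only the file with the latest SCN, which would stop the sets of files taking turns.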

I’m keen to do anything I can to contribute to solving and correcting this but it is difficult to do that when I don’t really know what the file creation system is doing, or at least meant to be doing. It must be able to ignore files marked as expired (as each file in the huge dump of XML files in late 2018 was marked), mustn’t it?

Michael


This is just going round in circles and if there has been any progress towards a solution in the last six months it is devilishly hard to see it.

Most of last week’s errors have been corrected but even more new errors have replaced them. WTTs such as the 165 which have only recently been loaded have been overwritten again. In the name of the Great Anorak in the Sky, how on earth can this happen?

Updated file at http://www.timetablegraveyard.co.uk/Anomaly_summary.xlsx.

This file now contains two tabs, the current one (27 Jan) and the previous one (20 Jan) updated to show whether or not the errors were corrected this week.

Michael

Hello Michael

We uploaded these last night - are you able to check if there are any further errors?

Thanks

Neaman

Hi Neaman

Sorry, I should have made clearer that the 27 Jan tab is based on the set of files loaded at about 6pm yesterday (27th). I have just checked that nothing further was uploaded later.

Just for background, I have to download the lot each time (as they are all newly created each week) but once I have done so I can check for (presumed wrongly) resuscitated files very quickly by using the Title property of each PDF file. I look for files (or file Titles) which were not there the previous week but had been there before that.
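In other words, the flag is a simple three-way set difference over the weekly snapshots of titles. A sketch with invented titles:

```python
def popped_back_up(this_week, last_week, older_weeks):
    """Titles present now, absent last week, but seen in some earlier
    week -- the signature of an old schedule reappearing after it had
    already been superseded or corrected."""
    return sorted((set(this_week) - set(last_week)) & set(older_weeks))

older = {"Route 92 MF SCN/39000"}          # loaded weeks ago, then replaced
last = ["Route 92 MF SCN/41234"]           # last week's (correct) file
now = ["Route 92 MF SCN/39000"]            # the old title is back

print(popped_back_up(now, last, older))
# → ['Route 92 MF SCN/39000']
```

The expensive part is the weekly bulk download to harvest the titles; the comparison itself takes no time at all.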

Michael

I have now looked at today’s update (Monday 3rd). The state of play can be found at http://www.timetablegraveyard.co.uk/Anomaly_summary.xlsx as usual.

It is a typical week, I am afraid. Some but not all of the extant errors have been corrected. Some of last week’s corrections have been overwritten with errors again (e.g. route 92) and there are some completely new errors (e.g. route 29).

This is getting to be something of a ritual dance. The files get updated, I come up with a list of errors, sometimes a very long one, which might or might not get acknowledged and then nothing happens till the next upload, which overall is no better.

I’m putting in several hours’ work here to help you guys identify the underlying problems but I get no information back. This has been going on for at least a year and it is hard to have confidence that anyone (supposing anyone is actually looking at all) is close to getting a handle on what is going on.

Michael