The system, as currently envisioned, will be a web-based portal which will allow users to evaluate the quality of various data feeds through any modern, standards-compliant browser with an internet connection to CCIT servers. The interface will be primarily visual, allowing users to compare a number of metrics (described in more detail below) both visually and numerically. Initially, the system will support viewing a single data source or comparing two at a time, though it will be designed in such a way that the two-source restriction will not be permanent and could be lifted in the future.
In the spirit of modern and user-friendly web design paradigms, the system should be responsive and visually appealing. Exporting generated graphs for sharing via email or other means should be straightforward. The tool should be useful "as is" or "out of the box" but still allow a useful amount of customization so that users can tailor it to their specific needs or application.
More details on the user interface and program layout will be specified in a separate document. The system will be targeted primarily at CCIT researchers familiar with the different data feeds and models which are available. However, the user interface will be designed with less technical users in mind, so that (for example) it can be demoed live to transportation professionals. Note that in general, the CCIT system contains two types of data feeds: travel time distributions (such as the FasTrack travel time system) and link-based and point-based speed/flow/density estimates. In the short term, the system will be designed with the latter group of feeds in mind. However, it should be architected in such a way that the addition of travel time feeds is possible without much additional effort.

===Distinction between DAT and FeedGenerator===
Note that the system described here can be called the "Data Analysis Tool," or DAT for short. Its only purpose is to ''accept any number of feeds as inputs and provide an interface to compare the data that these feeds contain directly''. Its purpose is '''not''' to take two vastly different feeds and generate the necessary modifications to make them compatible with each other for comparison. Such functionality is beyond the scope of the work for the DAT component specifically, even though it will most likely be necessary for DAT itself to work properly. As a result, a required component of the data quality assessment ''framework'' (but not DAT itself) is the ''FeedGenerator'' module. This component is responsible for creating the necessary output filters for the feeds which already exist in the system so that different feeds can be compared correctly in the DAT. The FeedGenerator module will also be responsible for making sure that consistent metadata exists across different feeds to enable comparisons to take place.
The minimal metadata requirements for a feed to be usable by DAT can be broken up into feed-level and datapoint-level requirements.

;Feed-level requirements
*For processed feeds
**Feed processing sequence – a description of the modifications that have been made to the input data to arrive at the data currently produced by the feed
**Inputs used for the feed
**Model parameters
**Typical input-output error profile
**Generic feed type (model-based, statistical, historical, real-time, or some combination of these)
*For raw feeds
**Sensor type – if the feed contains readings from a single sensor type
:Additionally, raw feeds may specify characteristics of the sensor networks, if applicable. This, however, is not required.

;Datapoint-level requirements
*Recorded time
*Received time
*Sensor ID
*Sensor location
*Sensor type [for multi-sensor feeds]
*GPS device error [if applicable]
*Measured value (speed/count/etc)

Users who want to compare two incompatible feeds will be provided with an interface to the FeedGenerator, through which they can submit a request to the system administrator to create an output filter for the feed(s) in question. A direct "instant filter creation" mechanism will not be present in the system, at least initially, due to the management complexities that such a mechanism would introduce.
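The feed-level and datapoint-level requirements above could be captured in a schema along these lines. This is a minimal Python sketch; the class and field names are hypothetical, since the document does not fix a concrete schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FeedMetadata:
    feed_type: str                       # "processed" or "raw"
    # Processed-feed fields
    processing_sequence: Optional[str] = None   # modifications applied to inputs
    inputs_used: list = field(default_factory=list)
    model_parameters: dict = field(default_factory=dict)
    error_profile: Optional[str] = None
    generic_type: Optional[str] = None   # model-based, statistical, historical, real-time, ...
    # Raw-feed fields
    sensor_type: Optional[str] = None    # only for single-sensor-type feeds

@dataclass
class DataPoint:
    recorded_time: float          # when the measurement was taken on the device
    received_time: float          # when it was stored on the server
    sensor_id: str
    sensor_location: tuple        # (lat, lon)
    measured_value: float         # speed/count/etc
    sensor_type: Optional[str] = None   # for multi-sensor feeds
    gps_error: Optional[float] = None   # if applicable

def is_dat_compatible(meta: FeedMetadata) -> bool:
    """Check the minimal feed-level requirements listed above."""
    if meta.feed_type == "processed":
        return all([meta.processing_sequence, meta.inputs_used, meta.generic_type])
    if meta.feed_type == "raw":
        return meta.sensor_type is not None
    return False
```

The FeedGenerator's output filters would then be responsible for populating this metadata consistently across feeds before DAT compares them.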
==Evaluation metrics==
The data quality assessment tool should provide an easy-to-use interface to specify "correct" or benchmark values for each of the metrics, as the tolerable amount of, for example, GPS error depends on the specific application for which the data is being considered. Also, any feed available to the system should be usable as a benchmark for any metric (as long as it has the data to calculate it).
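The benchmark comparison described above could work roughly as follows: the user supplies a tolerance appropriate for their application, and any feed's metric values can serve as the benchmark. A minimal sketch, with all function names being illustrative assumptions:

```python
def within_tolerance(value, benchmark, tolerance):
    """True if the value deviates from the benchmark by no more than the
    user-chosen, application-specific tolerance."""
    return abs(value - benchmark) <= tolerance

def flag_metric(feed_values, benchmark_values, tolerance):
    """Compare a feed's metric samples pairwise against a benchmark feed,
    returning True for each sample that falls outside the tolerance."""
    return [not within_tolerance(v, b, tolerance)
            for v, b in zip(feed_values, benchmark_values)]
```

For example, with a GPS-error tolerance of 0.1, `flag_metric([1.0, 2.0], [1.05, 3.0], 0.1)` flags only the second sample.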
All metrics, whether direct or calculated, will generally be generated on-the-fly for each request. As such, the system does not require the use of a database for storing calculated data. If system usage is high enough that database load or performance become a problem, we should look into using [http://www.memcached.org memcached] for storing the calculation results. Since the results are transient, storing them permanently in a database does not make much sense.
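The on-the-fly calculation with transient caching might look like the following sketch. A real deployment would replace the in-process dictionary with memcached; the function names and the TTL value are assumptions for illustration.

```python
import hashlib
import json
import time

_cache = {}       # key -> (expiry timestamp, result); stand-in for memcached
CACHE_TTL = 300   # seconds; results are transient, so a short TTL suffices

def _request_key(metric, feed_ids, time_range):
    """Deterministic cache key derived from the request parameters."""
    raw = json.dumps([metric, sorted(feed_ids), list(time_range)])
    return hashlib.sha1(raw.encode()).hexdigest()

def compute_metric(metric, feed_ids, time_range, calculator):
    """Return a cached result if still fresh, otherwise compute on the fly
    and cache the result for subsequent identical requests."""
    key = _request_key(metric, feed_ids, time_range)
    entry = _cache.get(key)
    now = time.time()
    if entry and entry[0] > now:
        return entry[1]
    result = calculator(metric, feed_ids, time_range)
    _cache[key] = (now + CACHE_TTL, result)
    return result
```

Because identical requests hash to the same key, repeated views of the same comparison avoid recomputation without any permanent database storage.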
The system will not allow users to flag specific, problematic data points or feeds initially due to the added overhead of having to store this data in a manageable and useful fashion. If users want to, in effect, create a new output filter based on their analyses, the tool will provide a textual summary of the user's current filtering algorithm so that a corresponding output filter may be created by the system administrator.
===Data-level metrics===
*Distribution of GPS errors (as reported by the recording device)
*Distribution of map-matching errors (as determined by the map-matching algorithms)
*Data transmission delay (time difference between data recording and data storage on server; 2-step delay for TeleNav only: device→TeleNav server→CCIT server)
*Sampling rate †
*Space coverage †
*Time coverage †
*Penetration rate †
*Distribution of point location distance from link end (for city locations with traffic lights; provides the ability to flag when most measured data points are not near the link ends, since people should be waiting at lights)
–––––<br/>
† – at this time, it is not entirely clear if this data will be generated on-the-fly by the DAT or by the FeedGenerator (see above for the distinction). This separation should become evident during implementation.
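As a concrete example of a data-level metric, the transmission-delay distribution can be computed directly from the recorded and received timestamps of each data point. A minimal sketch; the field names are illustrative, not a fixed DAT schema.

```python
def transmission_delays(points):
    """Per-point delay in seconds between recording on the device and
    storage on the server; assumes both timestamps are present."""
    return [p["received_time"] - p["recorded_time"] for p in points]

def delay_histogram(delays, bin_width=1.0):
    """Bucket delays into fixed-width bins for the visual distribution
    display in the DAT interface."""
    hist = {}
    for d in delays:
        bucket = int(d // bin_width) * bin_width
        hist[bucket] = hist.get(bucket, 0) + 1
    return hist
```

For the TeleNav feed, the same calculation would be applied twice, once per hop (device→TeleNav server, then TeleNav server→CCIT server), given timestamps at each hop.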
All of these metrics should be filterable by time intervals, feed, device model (if available), location (specified as a network, polygon, or set of specific links), and unique device. This would allow for the analysis of derived metrics such as "density of data per unique device" or "distribution of point location from link end for the city of SF."
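The filtering described above composes naturally as predicates over data points, and derived metrics such as "density of data per unique device" fall out of the filtered set. A hedged sketch, with all field and function names assumed for illustration:

```python
def make_filter(time_range=None, feed=None, device_model=None, device_id=None):
    """Build a predicate combining the user's filter criteria; criteria
    left as None are not applied."""
    def accept(point):
        if time_range and not (time_range[0] <= point["recorded_time"] <= time_range[1]):
            return False
        if feed and point["feed"] != feed:
            return False
        if device_model and point.get("device_model") != device_model:
            return False
        if device_id and point["device_id"] != device_id:
            return False
        return True
    return accept

def filter_points(points, **criteria):
    accept = make_filter(**criteria)
    return [p for p in points if accept(p)]

def points_per_device(points):
    """Derived metric: number of data points per unique device."""
    counts = {}
    for p in points:
        counts[p["device_id"]] = counts.get(p["device_id"], 0) + 1
    return counts
```

A location filter (network, polygon, or set of links) would slot in as one more predicate, delegating the geometric test to whatever map representation the system uses.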
===Application-level comparison===
This functionality will allow the users to directly compare the output of some model when different combinations of input feeds are used with it. At this time, the comparison between outputs will be purely visual, though specific metrics or analytical/numerical comparison methodologies can be added at a later time. The system will be designed as follows: for a chosen model (i.e. the highway model), the user will be able to specify which input feeds should be used and for what time period the model will be run. The user will provide a contact email address as well. When the user submits the request, their task will be placed into a queue of model runs. When the model finishes computation, the user will be sent an email containing a link at which they may view the model's results. These results will be viewable to other users as well. As a result, if the user wants to compare the output of the highway model on two different sets of input feeds, he may either:
*Find two existing output sets and compare their outputs
*Take an existing output set, request the generation of a second output set, and then compare the two sets once generation of the second set is complete
*Make two requests for the generation of two new output sets and then compare them once the computation of both is complete
Since this will allow for the easy generation of an arbitrary number of output sets, these generated output sets will be automatically deleted after a set number of days, initially 7.

==Future directions==
For the future, the system should be able to accommodate these additional feed types.