
OneMarketData FAQs

How will OneTick lower my total cost of ownership for data management, analytics and CEP solutions?

OneMarketData's solutions lower the total cost of ownership of traditional data management, analytics and CEP platforms by eliminating costly systems integration, maintenance and licensing fees. With OneMarketData, all systems are seamlessly integrated so users do not need to waste resources and time trying to combine disparate systems.

Additionally, since OneMarketData offers one solution for data management, CEP and analytics, users do not need to maintain three different systems, eliminating IT costs associated with maintaining system compatibility, upgrades and more. And since OneMarketData's solutions are so easy to use, users do not need to rely on outside consultants to provide support or train users on the system. Instead, all support is handled in-house by highly trained data management experts who are available 24 hours a day, seven days a week.

I would like to learn more about OneMarketData's OneTick and/or OneQuantData solutions. Is a trial license available?

Yes! OneMarketData offers trials for all of its data management solutions. To learn more about our solutions and how to begin a product trial, please contact us at 1.201.710.5977 or at sales@onetick.com.

What hardware do I need for a OneTick installation?

OneTick runs on multiple operating systems, including Linux, Windows and Solaris (both SPARC and Intel), in 32- and 64-bit variants. Specific hardware requirements vary because each OneTick installation is sized to the user's data volumes: CPU requirements range from 4 CPUs for historical data feeds to 32 CPUs for OPRA and other feeds with extensive volume.

What are your customer support hours?

OneMarketData provides 24-hour customer support for all its clients. OneMarketData customers can contact the support team directly via e-mail at support@onetick.com. Please note that additional arrangements are available through customer-specific service level agreements.

What is the license model for OneTick and OneTickCLOUD?

OneMarketData leases its software and data solutions with a degree of flexibility designed to align our fees to the benefit derived by our customers.

Does OneTick provide a secure data model with entitlements?

Yes. A security authentication and authorization scheme is available within the OneTick database system to grant or deny access to data and to specific Event Processors, such as those that write files or database data or that allow constructing custom CODE functions. A OneTick installation consists of multiple processes running on multiple hosts under one Unix/Linux ID (belonging to its own group) or MS Windows account, depending on the platform. This ID is used to install the software, run the modules and perform general administration functions.

Authentication, or login rights, leverages the operating system ACL (access control list), which is configurable and largely dependent on the underlying OS, but includes NTLM, Kerberos and LDAP.

Authorization defines what logged-in (i.e., authenticated) users have privileges or entitlements to access. The first level of entitlement is on databases, where groups of users can be granted read, write or no access. Entitlements can also be set on a range of database attributes, including symbols and minimum and maximum data age.

Does OneTick Reference Data File require downtime for file uploads?

No. The OneTick Reference Data File allows users to batch-load the reference file automatically each day without any downtime. For more information, please visit the OneTick Reference Data File Web page.

What data items are included in OneTick Reference Data File? How does OneTick handle these data?

The OneTick Reference Data File is a flat file uploaded daily to provide users with critical updates to pertinent reference data for all asset classes. Reference data includes historical symbol changes and symbol continuity information, corporate actions (e.g., dividends, rights, splits, warrants, etc.) and symbology mapping.

For details on how the OneTick platform handles reference data, including symbology, calendar, corporate action and continuous contract reference data, see "Can OneTick incorporate reference data?" below.

What client APIs are available in OneTick and in what programming languages?

OneTick provides a client library with bindings for the standard compiled languages (Java, C++ and C#) and the ubiquitous scripting languages PERL and Python. In every language, the API provides a means to execute queries (stored in OTQ files), access data directly, and execute SQL statements using OneTick's own dialect. As with all the query tools and mechanisms, the API operates on all the database storage subsystems (archives, in-memory, real-time), providing transparent access across them. The API is callback based, and result sets are self-describing, so it is possible to use exactly the same client code for historical queries as for continuous/CEP queries. In addition to query execution, the OneTick API can be used to extend the OneTick analytical language and to build custom versions of real-time data collectors, historical or in-memory database loaders, or CEP adapters.
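
As an illustration of the callback pattern described above, here is a minimal Python sketch. The module name onetick_client, the function run_otq and all of its parameters are hypothetical placeholders, not OneTick's actual API; consult the product documentation for the real bindings.

    # Hypothetical sketch of a callback-based query client.
    # 'onetick_client', 'run_otq' and the parameter names are
    # illustrative placeholders, NOT OneTick's actual API.
    import onetick_client  # hypothetical module

    def on_tick(tick):
        # Result sets are self-describing: field names arrive with the
        # data, so the same callback serves historical and CEP queries.
        print(tick["TIMESTAMP"], tick.get("PRICE"), tick.get("SIZE"))

    onetick_client.run_otq(
        otq_file="trades_with_quotes.otq",  # stored query (OTQ file)
        symbols=["IBM"],
        start="20240102093000",
        end="20240102160000",
        callback=on_tick,
    )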

Can I incorporate my own analytical functions? What about those from MATLAB and R?

Yes. OneTick supports custom user-defined functions (UDFs), which can be developed in Java, C++, PERL or Python, and it ships with integrated support for both R and MATLAB. The UDF mechanism and the R and MATLAB integrations are described in detail under "What analytics tools are available with OneTick?" below (see the Custom Functions and Integrated Functions sections).

What analytics tools are available with OneTick?

The analytics tools available with OneTick Database and CEP Engine can be divided into several groups:

Stored Queries

OneTick Queries (OTQs) are created using the OneTick Graphical Display Tool, or simply the GUI as it's known. Once created, an OTQ file can either be saved in a private directory or shared among users by placing it in a shared directory. Shared queries can then be referenced using a full path, or by name if they reside in a path (i.e., OTQ_PATH) configured in the client configuration file.

OTQ files can contain one or more named queries. These can be independent, even unrelated, queries, but the more common usage is to construct queries in a modular fashion. Queries can contain other (sub-)queries. A sub-query may hold a set of routine functions (e.g., join trades and quotes for a specific exchange and filter out bad ticks) and can be passed user-specified parameters. This makes a more structured approach to query design possible.

Built-In Functions

The OneTick database system provides a large collection of built-in analytical functions. These functions, referred to as Event Processors (EPs), are semantically assembled in a query (or sub-query) and ultimately define the logical, time series result set of that query. Event Processors are grouped by the type of function they perform, the input tick stream they consume and the output tick stream they produce. Aggregators, for example, group an input tick stream and produce aggregate values based on a bucketing scheme. Bucketing is a means to accumulate and tally the input ticks and produce an output stream at regular or varying (sliding or user-defined) intervals, which can be based on time, tick count, additional grouping or other conditions. Common aggregators are moving averages (AVERAGE), the highest value in the bucket (HIGH) and the lowest value in the bucket (LOW). There are also specialized aggregators that operate uniquely on Order Book tick types.
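
To make the bucketing idea concrete, the following self-contained Python sketch computes HIGH, LOW and AVERAGE over fixed one-minute time buckets of a tick stream. It illustrates the concept only and implies nothing about OneTick's internal implementation.

    # Generic illustration of time-based bucketing: group ticks into
    # fixed one-minute buckets and emit HIGH/LOW/AVERAGE per bucket.
    from collections import defaultdict

    BUCKET_MS = 60_000  # one-minute buckets

    def bucket_aggregates(ticks):
        """ticks: iterable of (timestamp_ms, price), timestamps non-decreasing."""
        buckets = defaultdict(list)
        for ts, price in ticks:
            buckets[ts // BUCKET_MS].append(price)
        for key in sorted(buckets):
            prices = buckets[key]
            yield {
                "bucket_start_ms": key * BUCKET_MS,
                "HIGH": max(prices),
                "LOW": min(prices),
                "AVERAGE": sum(prices) / len(prices),
            }

    ticks = [(0, 10.0), (30_000, 10.2), (61_000, 9.9)]
    for row in bucket_aggregates(ticks):
        print(row)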

Other types of Event Processors include filters, transformers, joins and unions, statistical and numerous finance-specific functions, sorting and ranking, and more. Input and output Event Processors provide the ability to read additional time series from files or external ODBC sources, or to output data to external components. Together these functions allow time series tick streams originating from any of the OneTick storage sources (archive, in-memory or real-time) to be filtered, reduced and/or enriched.

The real value comes from the ability to combine EPs in creative ways that ultimately make up the semantic logic of a complete query producing meaningful results. Use cases for modeling and optimization of trading decisions and quantitative analysis include:

  • Analyzing composite or merged Order Books from multiple sources
  • Currency adjusted Order Book analysis
  • Price & Volume analytics – fastest moving, average bid-ask spread, volume patterns, and more
  • Analyzing the relationship of trades to the prevailing quote(s) – ranking, Lee and Ready, cancellations, etc.
  • Reconstituted market replay for backtesting
  • Linear Regression, Auto-Correlation, Historical Volatility
  • Real-time index calculations (e.g., S&P 500) for Index Arbitrage
  • Mean Reversion, Moving Average, and Statistical Arbitrage for trade signal generation
  • Trade data interpolation
  • Portfolio Analytics and Sharpe Ratios
  • Historical Value at Risk calculations
  • Option Chaining, Intrinsic Value, Implied Volatility
  • Implementation shortfall, slippage and other Transaction Cost Analysis (TCA) techniques (a worked sketch follows this list)
  • Detection of optimal Option Spreads (Straddle, Calendar, Butterfly, etc.)
  • Take-profit/Stop-loss measurements
  • News-driven market analysis
  • Order-driven market data analysis
  • Modeling of differing continuous contract methodologies and their effect on returns
  • Data flagging/filtering based on any combination of user defined criteria
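
As one concrete instance of the TCA item above, the following Python sketch computes a simple implementation-shortfall figure for a set of fills against an arrival price. It is a textbook formulation, not OneTick code, and the sample numbers are invented for illustration.

    # Simple implementation shortfall: signed difference between the
    # volume-weighted execution price and the arrival (decision) price,
    # expressed in basis points. Textbook formulation, not OneTick code.
    def implementation_shortfall_bps(fills, arrival_price, side):
        """fills: list of (price, quantity); side: +1 buy, -1 sell."""
        qty = sum(q for _, q in fills)
        vwap = sum(p * q for p, q in fills) / qty
        return side * (vwap - arrival_price) / arrival_price * 10_000

    # Invented sample: buying 1,500 shares after deciding at 100.00.
    fills = [(100.02, 500), (100.05, 1000)]
    print(round(implementation_shortfall_bps(fills, 100.00, +1), 2), "bps")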

Integrated Functions

R

The open source R language has become increasingly popular in quantitative finance, and it has been integrated into both the GUI query builder and the analytic engine. An Event Processor for R provides a means to enter R language syntax and reference tick data. The R runtime DLL (shared lib) is loaded directly by the tick server's analytical engine. Logically, for queries, R appears as just another EP that consumes a tick stream and produces another, so queries can use the built-in functions (EPs) and R seamlessly.

Another integration scheme is provided for those preferring to work in R's own tools. R provides a means to source data through ODBC and SQL logic (via the RODBC package). OneTick provides an ODBC driver and its own dialect of SQL to both access database data and execute OTQ files (akin to relational stored procedures).
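
The same ODBC route is available from other environments. Below is a minimal Python sketch using the standard pyodbc package; the DSN name, table and SQL text are illustrative assumptions, and OneTick's actual SQL dialect and driver configuration should be taken from the product documentation.

    # Querying through an ODBC driver from Python with pyodbc.
    # The DSN name 'OneTick' and the SQL text are illustrative only.
    import pyodbc

    conn = pyodbc.connect("DSN=OneTick")  # DSN configured for the ODBC driver
    cursor = conn.cursor()
    cursor.execute(
        "SELECT TIMESTAMP, PRICE, SIZE FROM TRADES WHERE SYMBOL = ?", "IBM"
    )
    for row in cursor.fetchall():
        print(row.TIMESTAMP, row.PRICE, row.SIZE)
    conn.close()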

MATLAB

As with R, OneTick supports MATLAB on both the client and the server. OneTick client libraries ship with a MATLAB Extension (MEX) interface, which allows construction and execution of OneTick queries with results delivered back into MATLAB. It is also possible to use the MATLAB compiler to compile MATLAB code into OneTick custom functions (see Custom Functions below) that then become part of the standard Event Processor feature set in OneTick. Such a custom function ingests ticks from source OneTick graph nodes and produces output ticks that can be consumed by sink graph nodes.

Custom Functions

While OneTick provides a large built-in library and integrates with well-known mathematical libraries, there may be occasions where incorporating custom user-defined functions (UDFs) is desirable. These additional functions can be developed in Java, C++, PERL or Python, depending on data project requirements. UDFs can be implemented in two ways: (a) as standalone DLLs (shared libs) that are loaded by the analytical engine and executed just like the built-in functions, or (b) via a special-purpose Event Processor called the CODE EP, which provides a means to enter custom code directly in any of the previously mentioned languages. The CODE EP has its own language editor or can link to an external one. For compiled languages (e.g., Java, C++), a make-style build occurs the first time the query is executed (and any time the code is modified); behind the scenes, the CODE EP automatically produces a DLL (shared lib) for the analytical engine to load. For scripting languages (e.g., PERL), the appropriate runtime needs to be accessible. Only entitled power users gain access to the CODE Event Processor.

Architecturally, UDFs fit into the categories described under Built-In Functions. Depending on their purpose and on what output tick stream (if any) they are designed to produce, UDFs can be structured as aggregators, data filters or more general-purpose functions.

Can OneTick incorporate reference data?

Yes. The OneTick platform supports handling of reference data in two ways. First, reference data can be loaded into OneTick databases in the same way as series data, making it expressly available to users during query building (the CORP_ACTIONS event processor is one example). Second, OneTick supports special handling of certain specific types of reference data, which has an implicit effect on the behavior of the system. In other words, reference data loaded in this fashion is not expressly "visible" to the user, but is used by the system internally as part of its processing. In this scenario, reference data is not loaded into OneTick via one of its normal data loading facilities, but rather through a specially formatted XML file. A few key examples are as follows:

  • Symbology reference data – enables the system to transparently handle changes in symbology. For example, if a symbol for a security changes for any reason, the system can be informed of the date and time of the change and adjust its internal processing of queries so that users do not have to expressly account for the change in their queries. Another aspect of this is cross-symbology mapping. For example, IBM may be represented by ticker IBM in a database containing TAQ data but by IBM;US in a database containing data from Bloomberg; additionally, customers may maintain their own internal symbology in which IBM is referenced by an id such as 12345. By defining cross-symbology mapping information, it is possible to query any database using any of the mapped identifiers. In this example, the Bloomberg database could be queried using internal id 12345 or TAQ ticker IBM, and the TAQ database could be queried using internal id 12345 or Bloomberg ticker IBM;US.
  • Calendar reference data – enables the system to automatically respect open and close times of exchanges, as well as holidays and half-days. The net result is that user queries which cross multiple days do not need to expressly filter out the times when the exchange is closed.
  • Corporate action reference data – enables the system to automatically adjust price and quote series according to stock splits. These can be represented as additive and multiplicative adjustments, necessary for both price and volume adjustments.
  • Rolling logic and continuous contract reference data – enables the system to correctly handle instruments with rolling/resetting symbology, such as futures.

The choice of which reference data mechanism to use (or whether to use both) depends on the particular data set and business need, and should be made on a project-by-project basis, although care should be taken to keep the approach consistent across similar needs. When the special XML file mechanism is used, an additional decision needs to be made about the preferred mode of maintaining this file. The options include manual maintenance and automatic generation from an upstream system such as a security/calendar master, among other possibilities. This decision has dependencies on other components of a firm's data strategy (notably a security master) and can therefore be made at a later point, as the specific need arises. On a related note, OneMarketData offers a data product called OneQuantData, a comprehensive repository of historical reference and pricing data designed specifically for the global equities market. It is preloaded in the OneTick accelerator database, providing immediate accessibility for query and analysis using the graphical modeling front-end and custom applications.

Does OneTick have any limitations regarding data processing or data volumes?

There are no strict limits on the volume of data, though at extremely high volumes processing will begin to slow. OneTick counteracts these effects through its compression systems, described under "Can OneTick compress data in the database?" below.

While compression may not seem like a data modeling decision, financial data very often occupies terabytes of disk storage and gigabytes of memory. This of course varies considerably by the subscription universe (i.e., the total number of symbols collected) and the asset class (e.g., the daily data volume for options quotes is much larger than for currencies). Nonetheless, to efficiently manage all storage subsystem types, OneTick provides not only a means to compress data but also tuning mechanisms. The granularity of compression starts at the individual field (e.g., price), where space savings can be enormous for (near-)repeating values. It is also possible to create compression groups that combine multiple fields; this tuning characteristic has the added benefit of improving both compression and performance. The need for this attention to field-level detail becomes apparent with field values that are inherently incompressible (e.g., sequence numbers or timestamps); disabling compression for these fields prevents wasted processing. Achieving optimally tuned compression does ultimately require a bit of experimentation with a representative data sample.

Is there a latency penalty incurred by using a graphical modeling tool for query design?

Many vendors provide a graphical modeling tool for the construction of queries. These tools share the same objective: to shorten the time from idea to deployment. But not all vendors' tools are created equal; some incur a penalty for that abbreviated development cycle. This has to do with the graphical tool itself and its underlying technology. Many take the graphical models designed by humans and produce machine-generated source code, which is rarely as efficient or as optimal as code written directly by a skilled developer. Furthermore, in many of those same tools a semantic mismatch exists between the graphical modeling paradigm and the underlying language, often causing graphical expressions to translate poorly at run time. OneTick's GUI is not a code generator; the saved OTQ is natively executed by either the Tick Server or the CEP engine, depending on its use of historical and/or real-time data. Its graph of EPs maps one-to-one to the API that OneTick exposes, so there is no intermediary step or semantic mismatch caused by code generation. The library of Event Processors, all written in C++, is highly optimized for execution against all OneTick storage systems (archive, in-memory, real-time). Each function within a node on a graph represents a class, and each link between nodes is a source-sink relationship between objects of those classes. As a result of this architectural design, queries constructed graphically suffer no latency disadvantage compared to programming directly against OneTick's public API in a standard language.

How is OneTick CEP different from traditional CEP engines?

To ensure strategies will perform in today's markets, users need the ability to test strategies on historical and real-time data. That is why many analysts turn to Complex Event Processing (CEP) engines, which enable them to filter, correlate and aggregate real-time event data in a low latency environment. When combined with a historical database and data management solution, CEP engines help users test strategies on historical and real-time data - empowering them to determine whether their strategies will perform as predicted once deployed.

Yet developing an integrated system for capturing and analyzing historical and real-time data can be costly and time-consuming as most database and CEP solutions are not built with each other in mind. These disparate systems often require expensive integration projects, which rarely produce a seamless data management solution.

That is why OneMarketData developed OneTick CEP, a data management solution that combines the robust analytics of OneTick with the capability to process real-time streaming market data. Built from inception as a data management system, analytical research platform and CEP solution, OneTick CEP is the first data management solution that seamlessly integrates historical data with real-time complex event processing.

With OneTick CEP users can:

  • Eliminate duplicate coding for backtesting and trading strategies
  • Lower the total cost of ownership of clients’ data management and CEP solutions
  • Run strategies that leverage real-time and historical market data
  • Slash strategy time to production through an integrated data management system
  • Work with market data experts that understand your business

What is the programmatic design model of OneTick?

OneTick's primary programmatic model is a visual one: an application's semantic logic is the assembly of functional nodes in a directed graph. OneTick provides a graphical modeling tool for creating, debugging and running that logic. The tool focuses on the design and construction of algorithms and analytical queries, empowering the creativity of the user by providing a means to visually model complex logic, with the goal of shortening the time from idea to deployment.

By providing a large library of functions, query logic can filter, enrich, aggregate and transform tick data to reveal meaningful analysis. Query logic represents algorithms for trade analytics, statistical arbitrage, trade performance analysis and numerous other use cases. Once assembled, the algorithms can be back-tested using a range of historical data and simulation capabilities to ensure robustness and profitability.

(Figure: MovingCrossOTQ, an example query graph implementing a moving-average crossover.)
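
To convey the flavor of such a graph, here is a small Python sketch that chains "event processor"-like generators into a source-sink pipeline computing a two-moving-average crossover signal. The structure, in which each node consumes one tick stream and produces another, mirrors the design model described above; the code itself is an illustrative assumption and is not generated by or extracted from OneTick.

    # Illustrative source-sink pipeline in the spirit of an EP graph:
    # each stage consumes a tick stream and yields a derived stream.
    from collections import deque

    def moving_average(ticks, window):
        """Yield (ts, price, avg) using a simple rolling mean."""
        buf = deque(maxlen=window)
        for ts, price in ticks:
            buf.append(price)
            yield ts, price, sum(buf) / len(buf)

    def crossover_signal(fast, slow):
        """Emit BUY/SELL when the fast average crosses the slow one."""
        prev_diff = None
        for (ts, price, f), (_, _, s) in zip(fast, slow):
            diff = f - s
            if prev_diff is not None and diff * prev_diff < 0:
                yield ts, "BUY" if diff > 0 else "SELL", price
            prev_diff = diff

    ticks = [(t, p) for t, p in enumerate([10, 11, 12, 11, 10, 9, 10, 12, 13])]
    fast = moving_average(iter(ticks), 2)
    slow = moving_average(iter(ticks), 4)
    for signal in crossover_signal(fast, slow):
        print(signal)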

Can OneTick compress data in the database?

Yes. Each database file, whether representing a daily standard archive or an accelerator database, is compressed; the in-memory database is also compressed. Multiple compression types can be specified. Supported types are NATIVE and NATIVE_PLUS_GZIP. NATIVE typically results in archives 3 to 5 times smaller than the source data. NATIVE_PLUS_GZIP yields 2 to 3 times further compression but may slow data retrieval, typically by up to about 30% depending on the data being compressed. It has the benefit of a significant reduction in space usage, and it does not significantly impact very complex queries, where data retrieval accounts for a smaller percentage of overall processing. As more work is performed in the query (aggregations, filters, etc.), the overhead comes down significantly as a percentage of overall query execution time.
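
A back-of-the-envelope calculation makes these ratios concrete. The 1 TB starting figure and the midpoint ratios below are illustrative assumptions, not measured results:

    # Rough storage estimate from the compression ratios quoted above.
    # The 1 TB starting point and the midpoint ratios are illustrative.
    raw_tb = 1.0
    native = raw_tb / 4      # NATIVE: 3-5x smaller, take ~4x
    gzip = native / 2.5      # NATIVE_PLUS_GZIP: 2-3x further, take ~2.5x
    print(f"NATIVE:           {native:.2f} TB (~250 GB)")
    print(f"NATIVE_PLUS_GZIP: {gzip:.2f} TB (~100 GB)")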

Can I incorporate data from other sources within OneTick?

Yes. OneTick’s built-in Event Processors provide the ability to read additional time series from files or external ODBC sources, or output data to external components. These functions allow time series tick streams originating from any of the OneTick storage sources (archive, in-memory or real time) to be filtered, reduced and/or enriched.

Can I store my own derived data in OneTick?

Yes. Derived data databases can be used to achieve higher query performance over very large high-frequency data sets. OneTick includes several solutions for storage of derived data:

  • A OneTick Query (OTQ) loader process can query an existing database and store the query results to a database on a regular basis. This needs to be set up and managed by OneTick administrators.
  • A special Event Processor can write the results of a query back to a database to which the user has write access. This is under the user's control.

The typical usage of derived data queries is bar data over long time periods. For example, if there is frequent query usage of weekly, monthly or yearly bar data, a way to optimize those queries is to roll up tick-by-tick data into weekly, monthly or yearly bars and store them in their own time series. A specific Event Processor (EP) called WRITE_TO_ONETICK_DB can be used for this purpose; it would be the final EP in the query that calculates the bar data.
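
Outside OneTick, the roll-up idea itself is easy to demonstrate. The pandas sketch below aggregates tick-by-tick trades into weekly OHLC bars with summed volume; in a OneTick deployment the equivalent logic would be a query ending in the WRITE_TO_ONETICK_DB Event Processor, as described above. The sample data is invented for illustration.

    # Generic demonstration of rolling ticks up into weekly bars (pandas).
    # In OneTick this roll-up would be a query ending in WRITE_TO_ONETICK_DB.
    import pandas as pd

    ticks = pd.DataFrame({
        "ts": pd.to_datetime(["2024-01-02 09:30", "2024-01-03 10:00",
                              "2024-01-09 11:15", "2024-01-10 14:45"]),
        "price": [100.0, 101.5, 99.8, 100.7],
        "size": [200, 150, 300, 250],
    }).set_index("ts")

    bars = ticks["price"].resample("W").ohlc()   # open/high/low/close per week
    bars["volume"] = ticks["size"].resample("W").sum()
    print(bars.dropna())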

What is an Order Book? How does OneTick manage Order Book data?

Market depth refers to the passive orders whose tradable volume cannot be executed immediately by an exchange's matching engine. An order book is an organized collection of that market depth. Entries in the book represent the price and quantity of passive bid (buy) and offer (sell) orders, sorted by price.

A OneTick time series, as a consumer of order book data, receives the raw content provided by exchanges as an initial snapshot followed by a sequence of incremental changes. Those changes affect the book by modifying, removing or adding entries. Exchanges provide this content at various levels of detail; for example, some provide a specific order identifier (ORDER_ID) as a unique key that pinpoints the book entry to change.

In OneTick, market depth is represented as a series of price level modifications, or PRLs. Every modification is represented by a tick, which implicitly modifies the book. OneTick supports a number of book modifiers (ticks): incremental updates (a change to an entry's price and size), deletions (removal of an entry), full updates (a snapshot of the whole book) and group updates (an atomic change to multiple entries with the same timestamp).
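
The mechanics of applying price-level modifications can be illustrated generically. In the Python sketch below, a book side is a dict from price to size, and an update with size zero removes the level. This shows the concept only and is not OneTick's PRL implementation.

    # Generic price-level book maintenance: each update (side, price, size)
    # replaces the size at that price; size 0 deletes the level.
    def apply_update(book, side, price, size):
        levels = book[side]
        if size == 0:
            levels.pop(price, None)   # deletion modifier
        else:
            levels[price] = size      # add or incremental update

    def top_of_book(book):
        bid = max(book["BID"]) if book["BID"] else None
        ask = min(book["ASK"]) if book["ASK"] else None
        return bid, ask

    book = {"BID": {}, "ASK": {}}
    for upd in [("BID", 99.9, 500), ("ASK", 100.1, 300),
                ("BID", 99.8, 700), ("BID", 99.9, 0)]:
        apply_update(book, *upd)
    print(top_of_book(book))  # (99.8, 100.1)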

This architecture creates a flexible model for reconstructing books from history and consolidating books from multiple sources. For example, a full book snapshot is created each day for each archived security. User queries can request order book data starting from any time of day for any level of depth. Reconstruction is fast and efficient. And this same architecture also applies to book construction in real-time CEP.

A key aspect of order book analysis is accurate and fast reconstruction across any slice of time, whether within a day, over the previous week or across longer timeframes traversing multiple exchanges, ECNs and Alternative Trading Systems (ATSs). Analyzing order book liquidity can provide insight into the market's dynamics when you can reconstruct that liquidity aggregated across the whole market, a single venue or any combination of sources.

OneTick's built-in Event Processor library (see Built-In Functions above) is also amenable to order book processing and includes a selection of functions unique to order book analytics.

How do I load data into OneTick?

At a high level, in addition to the core database engine itself, the OneTick software includes numerous other functional components, including those dedicated to data capture. The data capture components (also known as collectors and feed handlers) are capable of real-time capture of high-frequency market data from sources such as RMDS (Reuters Market Data System), and they also have provisions for traditional, batch-oriented loads. These components include support for specialized logic that performs normalization (a data transformation process), corporate actions adjustments, symbology changes, and other processing typically required for financial markets data where historical data is a key component.

In addition to these components, OneTick has several value-added partnerships with companies providing historical data. Visit our partner page for the latest list of vendors and their solutions.

What types of data can OneTick handle?

OneTick is optimized for the capture, storage, retrieval and analysis of extremely high-frequency (tick) data. It is built to specifically handle data in the financial markets domain, such as instrument prices, trades, quotes, order book information, etc. Additionally, it can be used to store other types of data, such as closing prices, orders & executions, fundamental data and news feeds; OneTick has pre-defined constructs to support these financial market-specific concepts. The OneTick database server (also referred to as the Tick Server) is a proprietary, non-relational engine, which includes an in-memory database and a file-based archive store.

How is OneTick different from other data management systems?

OneTick is a time series tick database: a software system optimized for handling data organized by time. Time series are finite or infinite sequences of data items, where each item has an associated timestamp and the sequence of timestamps is non-decreasing. OneTick is such a database management system built specifically for storing financial tick data such as trades and quotes. It also incorporates the ability to apply trade cancels, corrections and corporate actions such as splits and dividends; these features, designed specifically for the financial industry, are unique to OneTick.

What is time-series data? What are financial time-series?

Time series are finite or infinite sequences of data items, where each item has an associated timestamp. The time resolution of some systems, such as financial data sources, can be quite fine (milliseconds, microseconds or even nanoseconds). Elements of a time series are called ticks. Time series are also called (data) streams. The term 'stream' is often associated with infinite sequences (as in 'stream-oriented computation', i.e., computation that does not assume that the end of the data is reachable and that occurs as the data arrives), but it is used interchangeably with 'time series'.
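
The stream-oriented view can be expressed directly in code. The Python sketch below treats a tick stream as a generator and computes a running statistic as data arrives, never assuming the end of the stream is reachable; it is a didactic illustration only.

    # Stream-oriented computation: consume ticks as they arrive and keep
    # a running mean, never assuming the stream is finite.
    def running_mean(ticks):
        count, total = 0, 0.0
        for ts, price in ticks:          # timestamps must be non-decreasing
            count += 1
            total += price
            yield ts, total / count      # emit an updated result per tick

    ticks = iter([(1, 10.0), (2, 10.4), (5, 10.1)])
    for ts, mean in running_mean(ticks):
        print(ts, round(mean, 3))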

What are the core technologies within OneTick?

OneTick is a specialized time-series tick database optimized for capture, storage, retrieval and analysis of extremely high-frequency (tick) data. It is built to specifically handle data in the financial markets domain, such as instrument prices, trades, quotes, order book information, etc.

OneTick is also a complex event processing (CEP) engine for the analysis of real-time market data. These are the foundational technologies within OneTick, folded together into a single solution, allowing users to efficiently combine historical and real-time data analysis. OneTick does not distinguish between, or require differing programming models for, real-time and historical data; they are viewed as a single time continuum in which what happened yesterday, last week or last month is simply an extension of what is occurring today and what may occur in the future.


