An easy step-by-step tutorial on how to write a Bid Ask Time Series with the Artesian Matlab SDK.

Artesian gives you straightforward access to your data history, letting you perform analyses and build plans in the most compatible way.

Let’s see step-by-step how to proceed.

The goal

Write data in a Bid Ask Time Series Market Data.

The reference data is fictitious, created exclusively for this case. With Artesian, it is possible to write any data attributable to a Time Series, making it suitable for saving your data production.


Import of Artesian libraries and configuration

To use all the features of Artesian, we must first authenticate. We need to install the Artesian toolbox, which is required to authenticate against Artesian (line 1 of the script) and to read and write data.

Once the toolbox is installed, we can configure Artesian by entering the base URL and the API key.

To obtain these two values, refer to the tutorial “How to Configure Artesian Matlab SDK”.

Once the Artesian configuration is complete, we can configure the Market Data Service (line 2):

				
cfg = ArtesianServiceConfig("https://arkive.artesian.cloud/{tenantName}/", "{api-key}");
mds = MarketDataService(cfg);


The Market Data Identifier and the data required to write the Bid Ask Time Series

Once Artesian and the Market Data Service have been configured, we can define the MarketData Identifier; that is, we can give a name to our MarketData.

In this case, the Provider’s name will be “MatlabSDK”, while the name of the Market Data will be “BidAskWrite”. The definition of these two fields is necessary for two reasons:

  1. The Provider and Market Data names form the unique identifier of our curve on Artesian. This combination of values is then translated into the MarketDataID.
  2. The Provider and Market Data names are needed to find the data within the portal through the free-text or category filters.

Once the market data and provider names are defined, we can decide on the essential characteristics of our Time Series, such as the type of granularity, the type of the Time Series and the Time Zone.

Artesian can support different granularities such as: 10min, 15min, 30min, Hour, Day, Week, Month, Quarter, Season and Year.

Once we decide the granularity of our market data, we must write the values accordingly. For example, with Granularity Day, each value corresponds to a specific day of a certain month in a particular year. With Granularity Hour, each value corresponds to a specific hour (minute and second) of a certain day in a particular month and year.
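As an illustrative sketch, using the ISO-style timestamp format that appears in the final code block of this tutorial, the date a value refers to depends on the chosen granularity:

```matlab
% Granularity Day: each value refers to a whole calendar day
dailyDate  = "2022-01-01T00:00:00";   % the value for January 1st, 2022

% Granularity Hour: each value refers to a specific hour of a specific day
hourlyDate = "2022-01-01T10:00:00";   % the value for 10:00 on January 1st, 2022
```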

The Time Zone must be set to the one the data we are saving refers to; this helps the system apply the necessary conversions when the data is extracted in a Time Zone different from the original.

The Type of the Time Series: in this case it is BidAsk, but it could also be Actual, Versioned, MarketAssessment or Auction (see the other tutorials).

The Aggregation Rule: in Artesian, the Aggregation Rule is the operation applied when data is extracted in a granularity different from the original one. You can set it to “Undefined”, “SumAndDivide”, or “AverageAndReplicate”. The data we are saving are market assessments: changing their granularity would force a re-evaluation of the assessment values, taking into account the raw data, which Artesian does not support at the moment. It is therefore mandatory to pass the value “Undefined” when writing this data type.

				
data = MarketDataEntityInput("MatlabSDK",...
    "BidAskWrite",...
    "Day",...
    "CET",...
    AggregationRuleEnum.Undefined,...
    MarketDataTypeEnum.BidAsk...
);
mds.MarketData.Create(data);

				
			


Writing the MarketData values

The last part of our code consists of the configuration of our write to Artesian.

The required parameters for this step are:

The MarketData identifier: the one we defined at the beginning of our code.

The reference Time Zone of the data we are writing: this must be “UTC” for data with hourly or finer granularity (converting the data beforehand if necessary), and must match the original Time Zone for data with daily or coarser granularity. The conversion to UTC for hourly or finer granularity lets Artesian correctly manage the data sent (e.g. the Winter/Summer time change).
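As a sketch reusing the UpsertCurveDataBidAsk call from the final code block of this tutorial (the hourly variant is hypothetical here, since this tutorial writes daily data):

```matlab
% Daily (or coarser) granularity: pass the original Time Zone of the data
data = UpsertCurveDataBidAsk(id, "CET", "2022-01-01T00:00:00Z", rows);

% Hourly (or finer) granularity: pass "UTC", after converting the
% timestamps of the rows to UTC if they were recorded in another zone
% data = UpsertCurveDataBidAsk(id, "UTC", "2022-01-01T00:00:00Z", rows);
```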

The BidAsk rows are a nested dictionary. The key of the first level is the date and time of the request for the quotation we want to write; the value is a second-level dictionary consisting of the product and its bid/ask values.

Writing at least one value in the dictionaries is mandatory to save the bid ask on Artesian.

Among the values that can be entered, suggested by IntelliSense, are “BestBidPrice”, “BestAskPrice”, “BestBidQuantity”, “BestAskQuantity”, “LastPrice” and “LastQuantity”. In this case, we will only consider bestBidPrice and bestBidQuantity.
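As an illustrative extension of the row format used in the final code block of this tutorial (the extra fields and their values are fictitious), a single quotation can carry several of these fields at once:

```matlab
% One request date ("2022-01-01T00:00:00"), one product ("Jan-22"),
% and several bid/ask fields written in the same row
rows = [];
rows = [rows {{"2022-01-01T00:00:00" {{"Jan-22" {{"bestBidPrice" 60}, {"bestAskPrice" 61}, {"bestBidQuantity" 50}, {"bestAskQuantity" 40}}}}}}];
```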

In the code example, we show the writing of daily data, with values for January 1st. Another mandatory field is “downloadedAt”, a metadata field that records when the data was generated.

Once the previous steps are complete, we can load the Bid Ask Time Series into the system using the command “UpsertCurve.Upsert()”.

				
rows = [];
rows = [rows {{"2022-01-01T00:00:00" {{"Jan-22" {{"bestBidPrice" 60}, {"bestBidQuantity" 50}}}}}}];

id = MarketDataIdentifier("MatlabSDK","BidAskWrite");
data = UpsertCurveDataBidAsk(id, "CET", "2022-01-01T00:00:00Z", rows);
mds.UpsertCurve.Upsert(data);
				
			


Visualization of the new MarketData on the Artesian portal

Unless there are errors to report, nothing will appear in the terminal. However, returning to the Artesian portal, we can verify that our Time Series appears under the Provider category with the previously given name “MatlabSDK”.

It is sufficient to write this procedure once, and from then on it is entirely reproducible and automatable in our workflow.

Not only does it save you time, but it allows you to minimize human errors caused by repeated operations on substantial amounts of data or different Excel files.

This is an undeniable advantage that allows us to focus on data analysis instead of its management and optimization.