Re: Free RealTime Data NOW Nest ODIN Trade Tiger Google Yahoo to AmiBroker, Fcharts M
It would be nice if we could do a fast export of today's data using AFL from COM.
Right now, I will still stick with the manual backfill process, as I need it only for two scrips and I don't want to overwrite everything with 1-minute bars.
A fast export would mean I can parse it and fill only the gaps.
But anyway, I think you only need to fetch the bars within the last minute that are not included in VWAP. This should be fast enough, as it is at most 59 bars per scrip. AFL may be better, but quotes from COM may also be OK; check.
You can do this just before calling the AB import to minimize tick loss.
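The "fill only the gaps" idea can be sketched as a small merge: take the bars already in the database and add only the exported bars whose timestamps are missing, so nothing gets overwritten. The bar layout (timestamp key, OHLCV tuple) and the function name here are assumptions for illustration, not the actual tool's format.

```python
def fill_gaps(existing, exported):
    """Return existing bars plus any exported bar whose timestamp is
    absent; bars already present are never overwritten."""
    merged = dict(existing)                 # keep current bars untouched
    for ts, bar in exported.items():
        if ts not in merged:                # fill only the gaps
            merged[ts] = bar
    return dict(sorted(merged.items()))     # bars back in time order

# Example: the missing 09:17 bar is filled in, 09:16 is NOT overwritten
have = {"09:15": (100, 101, 99, 100, 500), "09:16": (100, 102, 100, 101, 300)}
got  = {"09:16": (999, 999, 999, 999, 999), "09:17": (101, 103, 101, 102, 400)}
print(fill_gaps(have, got))
```

This keeps the two manually backfilled scrips intact while still patching any holes in the rest.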
There will also be data loss for the last minute. If we run at, say, 09:30:30, the VWAP statistics will not include the 30 seconds of data from 09:30.
Although I have not tested this, I think Josh's tool imports using one import call only. We don't expect the import to take long, so using multiple threads may be overkill anyway.
That would also need the AB import to run multithreaded on the same database. I don't know if it supports this.
Edit - A blocked AB during backfill import will not cause any tick loss for us. The RTD fetch and the AB calls are done in two different threads for this reason.
The only loss will be for any ticks written after the backfill tool reads the data but before it calls AB.Import().
This can be avoided by syncing the processes or by using the same tool for both tasks.
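One way to sketch the "sync the processes" idea: the RTD thread and the backfill thread share a lock, so no tick can be written between the moment the backfill side reads its data and the moment it calls the import. All names here are hypothetical; the real AB.Import() OLE call is represented by a stand-in callback.

```python
import threading

class SyncedFeed:
    def __init__(self):
        self.lock = threading.Lock()
        self.ticks = []                    # ticks delivered by RTD

    def on_tick(self, tick):               # called from the RTD thread
        with self.lock:                    # blocks while import is running
            self.ticks.append(tick)

    def backfill(self, ab_import):         # called from the backfill thread
        with self.lock:                    # RTD writes are held off here...
            snapshot = list(self.ticks)    # ...so nothing lands between
            ab_import(snapshot)            # the read and the Import call

feed = SyncedFeed()
feed.on_tick(("RELIANCE", 2850.0))
feed.backfill(lambda snap: print("imported", len(snap), "ticks"))
```

With both tasks behind the same lock, the read-then-import window that causes the residual loss simply cannot be interleaved with a tick write.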
We have data loss in the ZT feed itself, so if we can avoid adding more, then why not? If the RTD and backfill processes are synchronized, there won't be any loss.
Anyway, for now I will stick to plain backfill.
Only NEST and NOW are supported, which is what I need. Any tool with an RTD server can be made to work.
No need. VWAP Backfill creates an ASCII file in the temporary folder. A better way is to invoke AFL to export the current day's data at 1-minute resolution and append it to the file created by VWAP Backfill, then import it. We have to find a way to invoke AFL from COM. I think there is one.
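The append step described above can be sketched as plain file concatenation: take the ASCII file that VWAP Backfill writes, append the current day's 1-minute rows exported via AFL, and hand the merged file to AmiBroker's OLE `Import(Type, Filename, DefFileName)` call on the `Broker.Application` object. The file names and row format below are assumptions for illustration.

```python
def merge_for_import(vwap_file, afl_export_file, merged_file):
    """Concatenate the VWAP Backfill file and the AFL export into one
    ASCII import file; AB.Import() would then be called on merged_file."""
    with open(merged_file, "w") as out:
        for src in (vwap_file, afl_export_file):
            with open(src) as f:
                out.write(f.read())        # append rows as-is
    return merged_file
```

For the AFL side, newer AmiBroker versions expose the analysis via OLE (open a saved .apx, run it, export the result list), which may be the "way to invoke AFL from COM" hinted at here, though I have not verified it against this tool.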
When tick-mode backfill is done, AB locks the arrays till completion, hence the inevitable tick loss.
There are two ways of backfilling:
One - Backfill each ticker sequentially. This is blistering fast and takes less than a second per ticker, resulting in minimal (nil, for all practical purposes) tick loss per ticker.
Two - Backfill all tickers simultaneously by calling multiple asynchronous instances of the AB COM object. However, I don't know if NOW/NEST allows multiple instances of the statistics window. Also, the simultaneous backfill will actually cause a perceptible time increase (dependent upon the CPU cores and the L1 cache in your rig; the rig's RAM plays no role here). It is therefore likely that the advantage of simultaneous backfill will be frittered away by tick loss over a longer duration, as Ami will lock the arrays of the complete market watch until the operation finishes.
As per my benchmarking, it would take 3-4 seconds (ballpark) for a database of 100 tickers.
So I would rather go with the first option, with tomcat's solution implemented.
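The first option above can be sketched as a plain sequential loop: each ticker is backfilled on its own, so AB's arrays are only ever locked for that one ticker's sub-second import rather than for the whole market watch. `backfill_one` stands in for the real per-ticker COM backfill call and is hypothetical.

```python
import time

def backfill_sequential(tickers, backfill_one):
    """Backfill tickers one at a time (option One) and record how long
    each per-ticker lock window lasted."""
    timings = {}
    for ticker in tickers:                      # one ticker at a time
        start = time.perf_counter()
        backfill_one(ticker)                    # arrays locked only here
        timings[ticker] = time.perf_counter() - start
    return timings                              # per-ticker lock duration
```

At the benchmarked 3-4 seconds for 100 tickers, each individual lock window, and hence each ticker's tick-loss exposure, stays well under a second.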
P.S. - TB, I can see that you are a purist and even minimal tick loss is irksome to you, hence the implementation of tomcat's solution.
This utility will likely be the beginning of the end for the dubious 'fly-by-night' data-feed operators in the Indian context, as it is faster than any of their offerings! The pièce de résistance is that, since data is pulled from the trading platform, all segments (commodities, cash, currencies, etc.) are 'on tap'.
Amen