Zipline: Discovering Available Bundles
http://www.zipline.io/bundles.html#writing-a-new-bundle
(we’ll use this to import market data sets from IDX exchange and others)
Data Bundles
A data bundle is a collection of pricing data, adjustment data, and an asset database.
Bundles allow us to preload all of the data we will need to run backtests and store the
data for future runs.
$ zipline bundles
my-custom-bundle 2016-05-05 20:35:19.809398
my-custom-bundle 2016-05-05 20:34:53.654082
my-custom-bundle 2016-05-05 20:34:48.401767
quandl <no ingestions>
quantopian-quandl 2016-05-05 20:06:40.894956
quantopian-quandl (provided by zipline)
The dates and times next to the name show the times when the data for this bundle was
ingested. We have run three different ingestions for my-custom-bundle. We have never
ingested any data for the quandl bundle so it just shows <no ingestions> instead. Finally,
there is only one ingestion for quantopian-quandl.
Ingesting Data
The first step to using a data bundle is to ingest the data. The ingestion process will
invoke some custom bundle command and then write the data to a standard location
that zipline can find. By default the location where ingested data will be written
is $ZIPLINE_ROOT/data/<bundle> where by default ZIPLINE_ROOT=~/.zipline. The ingestion step
may take some time as it could involve downloading and processing a lot of data. This
can be run with:
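$ zipline ingest [-b <bundle>]
Here <bundle> is the name of the bundle to ingest; if -b is omitted, the default bundle (quantopian-quandl) is used.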
Old Data
When the ingest command is used it will write the new data to a subdirectory
of $ZIPLINE_ROOT/data/<bundle> which is named with the current date. This makes it possible
to look at older data or even run backtests with the older copies. Running a backtest
with an old ingestion makes it easier to reproduce backtest results later.
One drawback of saving all of the data by default is that the data directory may grow
quite large even if you do not want to use the data. As shown earlier, we can list all of
the ingestions with the bundles command. To solve the problem of leaking old data
there is another command: clean, which will clear data bundles based on some time
constraints.
For example:
# keep everything in the range of [before, after] and delete the rest
$ zipline clean [-b <bundle>] --before <date> --after <date>
When running a backtest we may also pass a bundle-date; zipline will use the most recent
ingestion that is less than or equal to the bundle-date. This is how we can run backtests
with older data. The reason that bundle-date uses a less than or equal to relationship is
that we can specify the date that we ran an old backtest and get the same data that
would have been available to us on that date. The bundle-date defaults to the current day
to use the most recent data.
By default zipline comes with the quandl data bundle which uses quandl’s WIKI dataset.
The quandl data bundle includes daily pricing data, splits, cash dividends, and asset
metadata. To ingest the quandl data bundle we recommend creating an account on
quandl.com to get an API key to be able to make more API requests per day. Once we
have an API key we may run:
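$ QUANDL_API_KEY=<api-key> zipline ingest -b quandl
The API key is passed through the QUANDL_API_KEY environment variable, which the quandl bundle reads from the environ mapping described below.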
though we may still run ingest as an anonymous quandl user (with no API key). We may
also set the QUANDL_DOWNLOAD_ATTEMPTS environment variable to an integer which is the
number of attempts that should be made to download data from quandl's servers. By
default QUANDL_DOWNLOAD_ATTEMPTS is 5, meaning that each request will be retried up to 5 times.
Note
QUANDL_DOWNLOAD_ATTEMPTS is not the total number of allowed failures, just the number of
allowed failures per request. The quandl loader will make one request per 100 equities
for the metadata followed by one request per equity.
Quantopian provides a mirror of the quandl WIKI dataset with the data in the formats
that zipline expects. This is available under the name: quantopian-quandl and is the default
bundle for zipline.
More than one yahoo equities bundle may be registered as long as they use different
names.
Writing a New Bundle
The ingest function is responsible for loading the data into memory and passing it to a
set of writer objects provided by zipline to convert the data to zipline’s internal format.
The ingest function may work by downloading data from a remote location like
the quandl bundle or yahoo bundles or it may just load files that are already on the
machine. The function is provided with writers that will write the data to the correct
location transactionally. If an ingestion fails part way through, the bundle will not be
written in an incomplete state.
ingest(environ,
       asset_db_writer,
       minute_bar_writer,
       daily_bar_writer,
       adjustment_writer,
       calendar,
       start_session,
       end_session,
       cache,
       show_progress,
       output_dir)
environ
environ is a mapping representing the environment variables to use. This is where any
custom arguments needed for the ingestion should be passed, for example:
the quandl bundle uses the environment to pass the API key and the download retry
attempt count.
asset_db_writer
asset_db_writer is an instance of AssetDBWriter. This is the writer for the asset metadata
which provides the asset lifetimes and the symbol to asset id (sid) mapping. This may
also contain the asset name, exchange and a few other columns. To write data,
invoke write() with dataframes for the various pieces of metadata. More information
about the format of the data exists in the docs for write.
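As a rough, hedged sketch (the columns shown are illustrative; consult the write() documentation for the exact schema), the equity metadata might be assembled and written like this inside an ingest function:
import pandas as pd

# Illustrative equities metadata: one row per asset, indexed by sid.
equities = pd.DataFrame(
    {
        'symbol': ['AAPL', 'MSFT'],
        'asset_name': ['Apple Inc.', 'Microsoft Corporation'],
        'start_date': [pd.Timestamp('2010-01-04'), pd.Timestamp('2010-01-04')],
        'end_date': [pd.Timestamp('2016-01-04'), pd.Timestamp('2016-01-04')],
        'exchange': ['NASDAQ', 'NASDAQ'],
    },
    index=[0, 1],  # sids
)

# asset_db_writer is the writer passed to the ingest function.
asset_db_writer.write(equities=equities)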
minute_bar_writer
minute_bar_writer is an instance of BcolzMinuteBarWriter. This writer is used to convert data
into zipline's internal bcolz format to later be read by a BcolzMinuteBarReader. If minute
data is provided, users should call write() with an iterable of (sid, dataframe) tuples.
The show_progress argument should also be forwarded to this method. If the data source
does not provide minute-level data, then there is no need to call the write method.
Note
The data passed to write() may be a lazy iterator or generator to avoid loading all of the
minute data into memory at a single time. A given sid may also appear multiple times in
the data as long as the dates are strictly increasing.
daily_bar_writer
daily_bar_writer is an instance of BcolzDailyBarWriter. This writer is used to convert data
into zipline's internal bcolz format to later be read by a BcolzDailyBarReader. If daily data is
provided, users should call write() with an iterable of (sid, dataframe) tuples.
The show_progress argument should also be forwarded to this method. If the data source
does not provide daily data, then there is no need to call the write method. It is also
acceptable to pass an empty iterable to write() to signal that there is no daily data. If no
daily data is provided but minute data is provided, a daily rollup will happen to service
daily history requests.
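A minimal sketch of that pattern, assuming the raw data has already been loaded as one OHLCV dataframe per sid (raw_data is a hypothetical mapping, not part of zipline):
def daily_frames(raw_data):
    # Yield (sid, dataframe) tuples; each dataframe has 'open', 'high',
    # 'low', 'close' and 'volume' columns indexed by trading session.
    for sid in sorted(raw_data):
        yield sid, raw_data[sid]

# Inside an ingest function:
daily_bar_writer.write(daily_frames(raw_data), show_progress=show_progress)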
adjustment_writer
adjustment_writer is an instance of SQLiteAdjustmentWriter. This writer is used to store splits,
mergers, dividends, and stock dividends. The data should be provided as dataframes
and passed to write(). Each of these fields is optional, but the writer can accept as
much of the data as you have.
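For illustration, a hedged sketch of writing splits and cash dividends (column names are assumptions modeled on the built-in bundles; check the SQLiteAdjustmentWriter docs for the exact schema and dtypes):
import pandas as pd

# Illustrative splits frame: one row per split event.
# Dates may need to be converted to the writer's expected dtype.
splits = pd.DataFrame({
    'sid': [0],
    'ratio': [0.5],  # price ratio applied on the effective date
    'effective_date': [pd.Timestamp('2014-06-09')],
})

# Illustrative cash dividends frame.
dividends = pd.DataFrame({
    'sid': [0],
    'amount': [0.52],
    'ex_date': [pd.Timestamp('2015-02-05')],
    'record_date': [pd.Timestamp('2015-02-09')],
    'declared_date': [pd.Timestamp('2015-01-27')],
    'pay_date': [pd.Timestamp('2015-02-12')],
})

# Inside an ingest function; every keyword argument is optional.
adjustment_writer.write(splits=splits, dividends=dividends)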
calendar
calendar is an instance of zipline.utils.calendars.TradingCalendar. The calendar is provided
to help some bundles generate queries for the days needed.
start_session
start_session is a pandas.Timestamp object indicating the first day that the bundle should
load data for.
end_session
end_session is a pandas.Timestamp object indicating the last day that the bundle should load
data for.
cache
cache is an instance of dataframe_cache. This object is a mapping from strings to
dataframes. This object is provided in case an ingestion crashes part way through. The
idea is that the ingest function should check the cache for raw data, if it doesn’t exist in
the cache, it should acquire it and then store it in the cache. Then it can parse and write
the data. The cache will be cleared only after a successful load; this prevents the ingest
function from needing to redownload all the data if there is some bug in the parsing. If it
is very fast to get the data, for example if it is coming from another local file, then there
is no need to use this cache.
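A minimal sketch of that check-then-store pattern (download_raw_frame is a hypothetical helper standing in for whatever acquires the raw data):
def load_raw(cache):
    key = 'raw-pricing'
    if key not in cache:
        # Only hit the remote source if no copy survived a previous,
        # partially completed ingestion.
        cache[key] = download_raw_frame()  # hypothetical download helper
    return cache[key]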
show_progress
show_progress is a boolean indicating that the user would like to receive feedback about
the ingest function's progress fetching and writing the data. Some examples of where to
use this flag are showing how many files have been downloaded out of the total needed,
or how far into some data conversion the ingest function is. One tool that may help with
implementing show_progress for a loop is maybe_show_progress. This argument should always
be forwarded to minute_bar_writer.write and daily_bar_writer.write.
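As a sketch, assuming maybe_show_progress can be imported from zipline.utils.cli and used as a context manager around an iterable (the download_symbol helper is hypothetical):
from zipline.utils.cli import maybe_show_progress

def download_all(symbols, show_progress):
    with maybe_show_progress(symbols,
                             show_progress,
                             label='Downloading pricing data: ') as it:
        for symbol in it:
            download_symbol(symbol)  # hypothetical per-symbol download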
output_dir
output_dir is a string representing the file path where all the data will be
written. output_dir will be some subdirectory of $ZIPLINE_ROOT and will contain the time of
the start of the current ingestion. This can be used to directly move resources there if for
some reason your ingest function can produce its own outputs without the writers. For
example, the quantopian-quandl bundle uses this to directly untar the bundle into
the output_dir.
Release Notes
Release 1.0.2
Enhancements
Adds forward fill checkpoint tables for the blaze core loader. This allow the loader
to more efficiently forward fill the data by capping the lower date it must search for
when querying data. The checkpoints should have novel deltas applied (#1276).
Updated VagrantFile to include all dev requirements and use a newer image
(#1310).
Allow correlations and regressions to be computed between two 2D factors by
doing computations asset-wise (#1307).
Filters have been made window_safe by default. Now they can be passed in as
arguments to other Filters, Factors and Classifiers (#1338).
Added an optional groupby parameter to rank(), top(), and bottom(). (#1349).
Added new pipeline filters, All and Any, which take another filter and return True
if an asset produced a True for any/all days in the previous window_length days
(#1358).
Added new pipeline filter AtLeastN, which takes another filter and an int N and
returns True if an asset produced a True on N or more days in the
previous window_length days (#1367).
Use external library empyrical for risk calculations. Empyrical unifies risk metric
calculations between pyfolio and zipline. Empyrical adds custom annualization
options for returns of custom frequencies. (#855)
Add Aroon factor. (#1258)
Add fast stochastic oscillator factor. (#1255)
Add a Dockerfile. (#1254)
New trading calendar which supports sessions which span across midnights, e.g.
24 hour 6:01PM-6:00PM sessions for futures trading. zipline.utils.tradingcalendar is
now deprecated. (#1138) (#1312)
Allow slicing a single column out of a Factor/Filter/Classifier. (#1267)
Provide Ichimoku Cloud factor (#1263)
Allow default parameters on Pipeline terms. (#1263)
Provide rate of change percentage factor. (#1324)
Provide linear weighted moving average factor. (#1325)
Add NotNullFilter. (#1345)
Allow capital changes to be defined by a target value. (#1337)
Add TrueRange factor. (#1348)
Add point in time lookups to assets.db. (#1361)
Make can_trade aware of the asset's exchange. (#1346)
Add downsample method to all computable terms. (#1394)
Add QuantopianUSFuturesCalendar. (#1414)
Enable publishing of old assets.db versions. (#1430)
Enable schedule_function for Futures trading calendar. (#1442)
Disallow regressions of length 1. (#1466)
Experimental
Add support for comingled Future and Equity history windows, and enable other
Future data access via data portal. (#1435) (#1432)
Bug Fixes
Performance
Documentation
Testing
Add test fixture which sources daily pricing data from minute pricing data fixtures.
(#1243)
Release 1.0.1
Enhancements
Bug Fixes
Release 1.0.0
Highlights
We have rewritten a lot of Zipline and its basic concepts in order to improve runtime
performance. At the same time, we’ve introduced several new APIs.
At a high level, earlier versions of Zipline simulations pulled from a multiplexed stream
of data sources, which were merged via heapq. This stream was fed to the main
simulation loop, driving the clock forward. This strong dependency on reading all the
data made it difficult to optimize simulation performance because there was no
connection between the amount of data we fetched and the amount of data actually
used by the algorithm.
Now, we only fetch data when the algorithm needs it. A new class, DataPortal,
dispatches data requests to various data sources and returns the requested values.
This makes the runtime of a simulation scale much more closely with the complexity of
the algorithm, rather than with the number of assets provided by the data sources.
Instead of the data stream driving the clock, now simulations iterate through a
pre-calculated set of day or minute timestamps. The timestamps are emitted
by MinuteSimulationClock and DailySimulationClock, and consumed by the main loop
in transform().
You can now pass in an adjustments source to the DataPortal, and we will apply
adjustments to the pricing data when looking backwards at data. Prices and volumes
used for execution, and presented to the algorithm in data.current, are the as-traded
values of the asset.
New Entry Points (#1173 and #1178)
In order to make it easier to use zipline we have updated the entry points for a backtest.
The three supported ways to run a backtest are now:
1. zipline.run_algo()
2. $ zipline run
3. %zipline (IPython magic)
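For example, the command line entry point might be invoked roughly like this (the algorithm file name is hypothetical; see zipline run --help for the full set of options):
$ zipline run -f my_algo.py --start 2014-1-1 --end 2015-1-1 -o results.pickle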
1.0.0 introduces data bundles. Data bundles are groups of data that should be
preloaded and used to run backtests later. This allows users to avoid specifying
which tickers they are interested in each time they run an algorithm. This also allows us
to cache the data between runs.
@zipline.data.bundles.register('my-new-bundle')
def my_new_bundle_ingest(environ,
                         asset_db_writer,
                         minute_bar_writer,
                         daily_bar_writer,
                         adjustment_writer,
                         calendar,
                         cache,
                         show_progress):
    ...
This function should retrieve the data it needs and then use the writers that have been
passed to write that data to disk in a location that zipline can find later.
New Classifier Methods
Several methods have been added for converting Classifiers into Filters:
element_of()
startswith()
endswith()
has_substring()
matches()
element_of is defined for all classifiers. The remaining methods are only defined for
string-dtype classifiers.
Enhancements
Made the data loading classes have more consistent interfaces. This includes the
equity bar writers, adjustment writer, and asset db writer. The new interface is that
the resource to be written to is passed at construction time and the data to write is
provided later to the writemethod as dataframes or some iterator of dataframes. This
model allows us to pass these writer objects around as a resource for other classes
and functions to consume (#1109 and #1149).
Added masking to zipline.pipeline.CustomFactor. Custom factors can now be
passed a Filter upon instantiation. This tells the factor to only compute over stocks
for which the filter returns True, rather than always computing over the entire
universe of stocks. (#1095)
Added zipline.utils.cache.ExpiringCache. A cache which wraps entries in
a zipline.utils.cache.CachedObject, which manages expiration of entries based on
the dt supplied to the get method. (#1130)
Implemented zipline.pipeline.factors.RecarrayField, a new pipeline term designed
to be the output type of a CustomFactor with multiple outputs. (#1119)
Added optional outputs parameter to zipline.pipeline.CustomFactor. Custom factors
are now capable of computing and returning multiple outputs, each of which are
themselves a Factor. (#1119)
Added support for string-dtype pipeline columns. Loaders for these columns
should produce instances of zipline.lib.labelarray.LabelArray when
traversed. latest() on string columns produces a string-dtype
zipline.pipeline.Classifier. (#1174)
Added several methods for converting Classifiers into Filters.
element_of is defined for all classifiers. The remaining methods are only defined for
string-dtype classifiers. (#1174)
Added BollingerBands factor. This factor implements the Bollinger Bands technical
indicator: https://en.wikipedia.org/wiki/Bollinger_Bands (#1199).
Fetcher has been moved from Quantopian internal code into Zipline (#1105).
Added new built-in factors, RollingPearsonOfReturns, RollingSpearmanOfReturns
and RollingLinearRegressionOfReturns (#1154)
Experimental Features
Warning
Bug Fixes
None
Performance
None
Build
None
Documentation
Miscellaneous
Release 0.9.0
Highlights
Added classifiers and normalization methods to pipeline, along with new datasets
and factors.
Added support for Windows with continuous integration on AppVeyor.
Enhancements
Added new built-in factors that use the new CashBuybackAuthorizations
and ShareBuybackAuthorizations datasets (#1022).
Added new built-in factors, zipline.pipeline.factors.BusinessDaysSinceDividendAnnouncement
and zipline.pipeline.factors.BusinessDaysUntilNextExDate.
Experimental Features
Warning
None
Bug Fixes
Fixed a bug where merging two numerical expressions failed given too many
inputs. This caused running a pipeline to fail when combining more than ten factors
or filters. (#1072)
Performance
None
Build
Documentation
None
Miscellaneous
Release 0.8.4
Highlights
Enhancements
Adds a way for users to provide a context manager to use when executing the
scheduled functions (including handle_data). This context manager will be passed
the BarData object for the bar and will be used for the duration of all of the functions
scheduled to run. This can be passed to TradingAlgorithm by the keyword
argument create_event_context (#828).
Added support for zipline.pipeline.factors.Factor instances
with datetime64[ns] dtypes. (#905)
Added a new EarningsCalendar dataset for use in the Pipeline API. This dataset
provides an abstract interface for adding earnings announcement data to a new
algorithm. A pandas-based reference implementation for this dataset can be found
in zipline.pipeline.loaders.earnings, and an experimental blaze-based implementation
can be found in zipline.pipeline.loaders.blaze.earnings. (#905).
Added new built-in factors, zipline.pipeline.factors.BusinessDaysUntilNextEarnings
and zipline.pipeline.factors.BusinessDaysSincePreviousEarnings. These factors use the
new EarningsCalendar dataset. (#905).
Added isnan(), notnan() and isfinite() methods
to zipline.pipeline.factors.Factor (#861).
Added zipline.pipeline.factors.Returns, a built-in factor which calculates the
percent change in close price over the given window_length. (#884).
Added a new built-in factor: AverageDollarVolume. (#927).
Added ExponentialWeightedMovingAverage and ExponentialWeightedMovingStdDev factors.
(#910).
Allow DataSet classes to be subclassed where subclasses inherit all of the
columns from the parent. These columns will be new sentinels so you can register
a custom loader for them (#924).
Added coerce() to coerce inputs from one type into another before passing them
to the function (#948).
Added optionally() to wrap other preprocessor functions to explicitly
allow None (#947).
Added ensure_timezone() to allow string arguments to get converted
into datetime.tzinfo objects. This also allows tzinfo objects to be passed directly
(#947).
Added two optional arguments, data_query_time and data_query_tz,
to BlazeLoader and BlazeEarningsCalendarLoader. These arguments allow the user to
specify a cutoff time for data when loading from the resource.
Experimental Features
Warning
Bug Fixes
Fixes an issue that would cause the daily/minutely method caching to change
the len of a SIDData object. This would cause us to think that the object was not empty
even when it was (#826).
Fixes an error raised in calculating beta when benchmark data were sparse.
Instead numpy.nan is returned (#859).
Fixed an issue pickling sentinel() objects (#872).
Fixed spurious warnings on first download of treasury data (#922).
Corrected the error messages for set_commission() and set_slippage() when used
outside of the initialize function. These errors referred to the functions as override_*
instead of set_*. This also renamed the exception types raised
from OverrideSlippagePostInit and OverrideCommissionPostInit to SetSlippagePostInit
and SetCommissionPostInit (#923).
Fixed an issue in the CLI that would cause assets to be added twice. This would
map the same symbol to two different sids (#942).
Fixed an issue where the PerformancePeriod incorrectly reported the
total_positions_value when creating a Account (#950).
Fixed issues around KeyErrors coming from history and BarData on 32-bit
python, where Assets did not compare properly with int64s (#959).
Fixed a bug where boolean operators were not properly implemented
on Filter (#991).
Installation of zipline no longer downgrades numpy to 1.9.2 silently and
unconditionally (#969).
Performance
Build
Documentation
Miscellaneous
Release 0.8.3
We advanced the version to 0.8.3 to fix a source distribution issue with pypi. There are
no code changes in this version.
Release 0.8.0
Highlights
Enhancements
Account object: Adds an account object to context to track information about the
trading account. Example:
context.account.settled_cash
Returns the settled cash value that is stored on the account object. This value is
updated accordingly as the algorithm is run (#396).
HistoryContainer can now grow dynamically. Calls to history() will now be able to
increase the size or change the shape of the history container to be able to service
the call. add_history() now acts as a performance hint to pre-allocate sufficient space
in the container. This change is backwards compatible with history, all existing
algorithms should continue to work as intended (#412).
Simple transforms have been ported from Quantopian and now use history(). SIDData
now has methods for:
stddev
mavg
vwap
returns
These methods, except for returns, accept a number of days. If you are running with
minute data, then this will calculate the number of minutes in those days, accounting
for early closes and the current time and apply the transform over the set of
minutes. returns takes no parameters and will return the daily returns of the given
asset. Example:
data[security].stddev(3)
(#429).
New fields in Performance Period. Performance Period has new fields accessible
in the return value of to_dict: gross leverage, net leverage, short exposure,
long exposure, shorts count, longs count (#464).
Allow order_percent() to work with various market values (by Jeremiah Lowin)
(#477).
Command line option for printing the algo to stdout (by Andrea D'Amore)
(#545).
New user defined function before_trading_start. This function can be
overridden by the user to be called once before the market opens every day (#389).
New api function schedule_function(). This function allows the user to
schedule a function to be called based on more complicated rules about the date
and time. For example, call the function 15 minutes before market close respecting
early closes (#411). A sketch of this usage follows this list.
New api function set_do_not_order_list(). This function accepts a list of
assets and adds a trading guard that prevents the algorithm from trading them. Adds
a point-in-time list of leveraged ETFs that people may want to mark as 'do not
trade' (#478).
Adds a class for representing securities. order() and other order functions
now require an instance of Security instead of an int or string (#520).
Generalize the Security class to Asset. This is in preparation of adding
support for other asset types (#535).
New api function get_environment(). This function by default returns the
string 'zipline'. This is used so that algorithms can have different behavior on
Quantopian and local zipline (#384).
Extends get_environment() to expose more of the environment to the
algorithm. The function now accepts an argument that is the field to return. By
default, this is 'platform' which returns the old value of 'zipline', but other
fields can be requested (#449).
New api function set_max_leverage(). This method adds a trading guard that
prevents your algorithm from over-leveraging itself (#552).
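A hedged sketch of the schedule_function() usage mentioned above, assuming the date_rules and time_rules helpers exposed through zipline.api:
from zipline.api import schedule_function, date_rules, time_rules

def rebalance(context, data):
    # Hypothetical user function to run on a schedule.
    pass

def initialize(context):
    # Run rebalance every day, 15 minutes before the market close,
    # respecting early closes.
    schedule_function(rebalance,
                      date_rules.every_day(),
                      time_rules.market_close(minutes=15))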
Experimental Features
Warning
Adds new Pipeline API. The pipeline API is a high-level declarative API for
representing trailing window computations on large datasets (#630).
Adds support for futures trading (#637).
Adds Pipeline loader for blaze expressions. This allows users to pull data from
any format blaze understands and use it in the Pipeline API. (#775).
Bug Fixes
Fix a bug where the reported returns could sharply dip for random periods of time
(#378).
Fix a bug that prevented debuggers from resolving the algorithm file (#431).
Properly forward arguments to user defined initialize function (#687).
Fix a bug that would cause treasury data to be redownloaded every backtest
between midnight EST and the time when the treasury data was available (#793).
Fix a bug that would cause the user defined analyze function to not be called if it
was passed as a keyword argument to TradingAlgorithm (#819).
Performance
Build
None
Documentation
Switched to sphinx for the documentation (#816).
Release 0.7.0
Release
0.7.0
:
Highlights
Enhancements
Grabs the data from yahoo finance, runs the file dual_moving_avg.py (and looks
for dual_moving_avg_analyze.py which, if found, will be executed after the algorithm has
been run), and outputs the perf DataFrame to dma.pickle (#325).
IPython magic command (at the top of an IPython notebook cell). Example:
%%zipline --symbols AAPL --start 2011-1-1 --end 2012-1-1 -o perf
Does the same as above except instead of executing the file looks for the algorithm
in the cell and instead of outputting the perf df to a file, creates a variable in the
namespace called perf (#325).
Adds Trading Controls to the algorithm API.
The following functions are now available on TradingAlgorithm and for algo scripts:
set_max_order_size(self, sid=None, max_shares=None, max_notional=None) - Set a limit on
the absolute magnitude, in shares and/or total dollar value, of any single order placed by
this algorithm for a given sid. If sid is None, then the rule is applied to any order
placed by the algorithm. Example:
def initialize(context):
    # Algorithm will raise an exception if we attempt to place a single
    # order for more than 10 shares or 1000 dollars worth of sid(24).
    set_max_order_size(sid(24), max_shares=10, max_notional=1000.0)
set_max_position_size(self, sid=None, max_shares=None, max_notional=None) - Set a limit on
the absolute magnitude, in either shares or dollar value, of any position held by the
algorithm for a given sid. If sid is None, then the rule is applied to any position held
by the algorithm. Example:
def initialize(context):
    # Algorithm will raise an exception if we attempt to hold more than
    # 10 shares or 1000 dollars worth of sid(24).
    set_max_position_size(sid(24), max_shares=10, max_notional=1000.0)
set_max_order_count(self, max_count) - Set a limit on the number of orders that can be
placed by the algorithm in a single trading day. Example:
def initialize(context):
    # Algorithm will raise an exception if more than 50 orders are placed in a day.
    set_max_order_count(50)
set_long_only(self) - Set a rule specifying that the algorithm may not hold short
positions. Example:
def initialize(context):
    # Algorithm will raise an exception if it attempts to place
    # an order that would cause it to hold a short position.
    set_long_only()
(#329).
Adds an all_api_methods classmethod on TradingAlgorithm that returns a list of
all TradingAlgorithm API methods (#333).
Expanded record() functionality for dynamic naming. The record() function can
now take positional args before the kwargs. All original usage and functionality is the
same, but now these extra usages will work:
name = 'Dynamically_Generated_String'
record( name, value, ... )
record( name, value1, 'name2', value2, name3=value3, name4=value4 )
The requirements are simply that the positional args occur only before the kwargs
(#355).
history() has been ported from Quantopian to Zipline and provides a moving
window of market data. history() replaces BatchTransform. It is faster, works for
minute level data and has a superior interface. To use it, call add_history() inside
of initialize() and then receive a pandas DataFrame by calling history() from
inside handle_data(); a sketch follows this list. Check out the tutorial and an
example. (#345 and #357).
history() now supports 1m window lengths (#345).
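A minimal sketch of that add_history()/history() pattern, assuming both functions are importable from zipline.api as in this release (this API was later superseded by data.history):
from zipline.api import add_history, history

def initialize(context):
    # Pre-register a 5-day window of daily prices.
    add_history(bar_count=5, frequency='1d', field='price')

def handle_data(context, data):
    # DataFrame of the last 5 daily prices, one column per asset.
    prices = history(bar_count=5, frequency='1d', field='price')
    context.latest_mean = prices.mean()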
Bug Fixes
Fix alignment of trading days and open and closes in trading environment (#331).
RollingPanel fix when adding/dropping new fields (#349).
Performance
None
Maintenance and Refactorings
Removed undocumented and untested HDF5 and CSV data sources (#267).
Refactor sim_params (#352).
Refactoring of history (#340).
Build
The following dependencies have been updated (zipline might work with other
versions too):
-pytz==2013.9
+pytz==2014.4
+numpy==1.8.1
-numpy==1.8.0
+scipy==0.12.0
+patsy==0.2.1
+statsmodels==0.5.0
-six==1.5.2
+six==1.6.1
-Cython==0.20
+Cython==0.20.1
-TA-Lib==0.4.8
+--allow-external TA-Lib --allow-unverified TA-Lib TA-Lib==0.4.8
-requests==2.2.0
+requests==2.3.0
-nose==1.3.0
+nose==1.3.3
-xlrd==0.9.2
+xlrd==0.9.3
-pep8==1.4.6
+pep8==1.5.7
-pyflakes==0.7.3
-pip-tools==0.3.4
+pyflakes==0.8.1
-scipy==0.13.2
-tornado==3.2
-pyparsing==2.0.1
-patsy==0.2.1
-statsmodels==0.4.3
+tornado==3.2.1
+pyparsing==2.0.2
-Markdown==2.3.1
+Markdown==2.4.1
Contributors
The following people have contributed to this release, ordered by number of commits:
38 Scott Sanderson
29 Thomas Wiecki
26 Eddie Hebert
6 Delaney Granizo-Mackenzie
3 David Edwards
3 Richard Frank
2 Jonathan Kamens
1 Pankaj Garg
1 Tony Lambiris
1 fawce
Release 0.6.1
Highlights
Enhancements
Always process new orders, i.e. on bars where handle_data isn't called but there is
'clock' data (e.g. a consistent benchmark), orders are still processed.
Empty positions are now filtered from the portfolio container, to help prevent
algorithms from operating on positions that are not in the existing universe of stocks.
Formerly, iterating over positions would return positions for stocks which had zero
shares held, and only an explicit check in algorithm code for pos.amount != 0 could
prevent operating on a non-existent position.
Add trading calendar for BMF&Bovespa.
Add beginning of algo script support.
Starts on the path of parity with the script syntax in Quantopian's IDE
on https://quantopian.com. Example:
from datetime import datetime
import pytz

from zipline import TradingAlgorithm
from zipline.utils.factory import load_from_yahoo
from zipline.api import order


def initialize(context):
    context.test = 10


def handle_data(context, data):
    order('AAPL', 10)
    print(context.test)


if __name__ == '__main__':
    import pylab as pl
    start = datetime(2008, 1, 1, 0, 0, 0, 0, pytz.utc)
    end = datetime(2010, 1, 1, 0, 0, 0, 0, pytz.utc)
    data = load_from_yahoo(stocks=['AAPL'], indexes={}, start=start, end=end)
    data = data.dropna()
    algo = TradingAlgorithm(initialize=initialize, handle_data=handle_data)
    results = algo.run(data)
    results.portfolio_value.plot()
    pl.show()
The source for the open and close/early close calendar and the trading
day calendar is now the same, which should help prevent potential issues due to
misalignment.
Allows configurations where the benchmark is provided as a generator-based data
source to not need to supply a second benchmark list just to populate dates.
Port history() API method from Quantopian. Opens the core of
the history() function that was previously only available on the Quantopian platform.
def initialize(context):
    add_history(bar_count=2, frequency='1d', field='price')
cumulative.beta
cumulative.alpha
cumulative.information
cumulative.sharpe
period.sortino
How Risk Calculations Are Changing
Risk Fixes for Both Period and Cumulative
Downside Risk
Across the board the standard deviation has been standardized to using a 'sample'
calculation, whereas before cumulative risk was mostly using 'population'.
Using ddof=1 with np.std calculates as if the values are a sample.
Beta
Use the daily algorithm returns and benchmarks instead of annualized mean returns.
Volatility
The volatility is an input to other calculations so this change affects Sharpe and
Information ratio calculations.
Information Ratio
The benchmark returns input is changed from annualized benchmark returns to the
annualized mean returns.
Alpha
The benchmark returns input is changed from annualized benchmark returns to the
annualized mean returns.
Sortino
Now uses the downside risk of the daily return vs. the mean algorithm returns for the
minimum acceptable return instead of the treasury return.
The above required adding the calculation of the mean algorithm returns for period
risk.
Also, uses algorithm_period_returns and treasury_period_return as the cumulative
Sortino does, instead of using algorithm returns for both inputs into the Sortino
calculation.
Performance
Added support for building and releasing via conda, for those who prefer building
with http://conda.pydata.org/ to compiling locally with pip. The following should install
Zipline on many systems.
conda install -c quantopian zipline
Contributors
The following people have contributed to this release, ordered by number of commits:
49 Eddie Hebert
28 Thomas Wiecki
11 Richard Frank
2 Jamie Kirkpatrick
2 Jeremiah Lowin
1 Colin Alexander
1 Michael Schatzow
1 Moises Trovo
1 Suminda Dharmasena