Zipline: Data Bundles and Release Notes


Zipline

http://www.zipline.io/bundles.html#writing-a-new-bundle 

(we’ll use this to import market data sets from the IDX exchange and other sources)

Data Bundles
A data bundle is a collection of pricing data, adjustment data, and an asset database.
Bundles allow us to preload all of the data we will need to run backtests and store the
data for future runs.

Discovering Available Bundles


Zipline comes with a few bundles by default as well as the ability to register new
bundles. To see which bundles we have available, we may run
the bundles command, for example:

$ zipline bundles
my-custom-bundle 2016-05-05 20:35:19.809398
my-custom-bundle 2016-05-05 20:34:53.654082
my-custom-bundle 2016-05-05 20:34:48.401767
quandl <no ingestions>
quantopian-quandl 2016-05-05 20:06:40.894956

The output here shows that there are 3 bundles available:

 my-custom-bundle (added by the user)
 quandl (provided by zipline)
 quantopian-quandl (provided by zipline)

The dates and times next to the name show the times when the data for this bundle was
ingested. We have run three different ingestions for my-custom-bundle. We have never
ingested any data for the quandl bundle so it just shows <no ingestions> instead. Finally,
there is only one ingestion for quantopian-quandl.

Ingesting Data
The first step to using a data bundle is to ingest the data. The ingestion process will
invoke some custom bundle command and then write the data to a standard location
that zipline can find. By default the location where ingested data will be written
is $ZIPLINE_ROOT/data/<bundle> where by default ZIPLINE_ROOT=~/.zipline. The ingestion step
may take some time as it could involve downloading and processing a lot of data. This
can be run with:

$ zipline ingest [-b <bundle>]

where <bundle> is the name of the bundle to ingest, defaulting to quantopian-quandl.

Old Data
When the ingest command is used it will write the new data to a subdirectory
of $ZIPLINE_ROOT/data/<bundle> which is named with the current date. This makes it possible
to look at older data or even run backtests with the older copies. Running a backtest
with an old ingestion makes it easier to reproduce backtest results later.

One drawback of saving all of the data by default is that the data directory may grow
quite large even if you do not want to use the data. As shown earlier, we can list all of
the ingestions with the bundles command. To solve the problem of leaking old data
there is another command: clean, which will clear data bundles based on some time
constraints.

For example:

# clean everything older than <date>
$ zipline clean [-b <bundle>] --before <date>

# clean everything newer than <date>
$ zipline clean [-b <bundle>] --after <date>

# keep everything in the range of [before, after] and delete the rest
$ zipline clean [-b <bundle>] --before <date> --after <date>

# clean all but the last <int> runs
$ zipline clean [-b <bundle>] --keep-last <int>
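
For example, to keep only the most recent ingestion of the default bundle:

$ zipline clean -b quantopian-quandl --keep-last 1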

Running Backtests with Data Bundles


Now that the data has been ingested we can use it to run backtests with
the run command. The bundle to use can be specified with the --bundle option like:

$ zipline run --bundle <bundle> --algofile algo.py ...


We may also specify the date to use to look up the bundle data with the --bundle-date
option. Setting the --bundle-date will cause run to use the most recent bundle
ingestion that is less than or equal to the bundle-date. This is how we can run backtests
with older data. The reason that --bundle-date uses a less than or equal to relationship is
that we can specify the date that we ran an old backtest and get the same data that
would have been available to us on that date. The --bundle-date defaults to the current day
to use the most recent data.
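
For example, to reproduce a backtest against the data that would have been available on
a given day (the dates here are illustrative):

$ zipline run --bundle quantopian-quandl --bundle-date 2016-05-05 --algofile algo.py ...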

Default Data Bundles


Quandl WIKI Bundle

By default zipline comes with the quandl data bundle which uses quandl’s WIKI dataset.
The quandl data bundle includes daily pricing data, splits, cash dividends, and asset
metadata. To ingest the quandl data bundle we recommend creating an account on
quandl.com to get an API key to be able to make more API requests per day. Once we
have an API key we may run:

$ QUANDL_API_KEY=<api-key> zipline ingest -b quandl

though we may still run ingest as an anonymous quandl user (with no API key). We may
also set the QUANDL_DOWNLOAD_ATTEMPTS environment variable to an integer which is the
number of attempts that should be made to download data from quandl’s servers. By
default QUANDL_DOWNLOAD_ATTEMPTS is 5, meaning that each request will be attempted up to 5 times.

Note

QUANDL_DOWNLOAD_ATTEMPTS is not the total number of allowed failures, just the number of
allowed failures per request. The quandl loader will make one request per 100 equities
for the metadata followed by one request per equity.

Quantopian Quandl WIKI Mirror

Quantopian provides a mirror of the quandl WIKI dataset with the data in the formats
that zipline expects. This is available under the name: quantopian-quandl and is the default
bundle for zipline.

Yahoo Bundle Factories


Zipline also ships with a factory function for creating a data bundle out of a set of tickers
from yahoo: yahoo_equities(). yahoo_equities() makes it easy to pre-download and cache
the data for a set of equities from yahoo. The yahoo bundles include daily pricing data
along with splits, cash dividends, and inferred asset metadata. To create a bundle from
a set of equities, add the following to your ~/.zipline/extension.py file:

from zipline.data.bundles import register, yahoo_equities

# these are the tickers you would like data for
equities = {
    'AAPL',
    'MSFT',
    'GOOG',
}
register(
    'my-yahoo-equities-bundle',  # name this whatever you like
    yahoo_equities(equities),
)

This may now be used like:

$ zipline ingest -b my-yahoo-equities-bundle
$ zipline run -f algo.py --bundle my-yahoo-equities-bundle

More than one yahoo equities bundle may be registered as long as they use different
names.

Writing a New Bundle


Data bundles exist to make it easy to use different data sources with zipline. To add a
new bundle, one must implement an ingest function.

The ingest function is responsible for loading the data into memory and passing it to a
set of writer objects provided by zipline to convert the data to zipline’s internal format.
The ingest function may work by downloading data from a remote location like
the quandl bundle or yahoo bundles or it may just load files that are already on the
machine. The function is provided with writers that will write the data to the correct
location transactionally. If an ingestion fails partway through, the bundle will not be
written in an incomplete state.

The signature of the ingest function should be:

ingest(environ,
       asset_db_writer,
       minute_bar_writer,
       daily_bar_writer,
       adjustment_writer,
       calendar,
       start_session,
       end_session,
       cache,
       show_progress,
       output_dir)

environ

environ is a mapping representing the environment variables to use. This is where any
custom arguments needed for the ingestion should be passed, for example:
the quandl bundle uses the environment to pass the API key and the download retry
attempt count.
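
For instance, an ingest function might pull a custom variable out of environ; the
variable name here is a hypothetical example:

api_key = environ.get('MY_BUNDLE_API_KEY')  # hypothetical variable name
if api_key is None:
    raise ValueError("MY_BUNDLE_API_KEY must be set to ingest this bundle")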

asset_db_writer

asset_db_writer is an instance of AssetDBWriter. This is the writer for the asset metadata
which provides the asset lifetimes and the symbol to asset id (sid) mapping. This may
also contain the asset name, exchange and a few other columns. To write data,
invoke write() with dataframes for the various pieces of metadata. More information
about the format of the data exists in the docs for write.
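
As a minimal sketch inside an ingest function, assuming a single equity with made-up
dates, the metadata might be written like:

import pandas as pd

equities = pd.DataFrame({
    'symbol': ['AAPL'],
    'start_date': [pd.Timestamp('2014-01-02')],
    'end_date': [pd.Timestamp('2016-01-04')],
    'exchange': ['NYSE'],
})
asset_db_writer.write(equities=equities)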

minute_bar_writer

minute_bar_writer is an instance of BcolzMinuteBarWriter. This writer is used to convert data
to zipline’s internal bcolz format to later be read by a BcolzMinuteBarReader. If minute data
is provided, users should call write() with an iterable of (sid, dataframe) tuples.
The show_progress argument should also be forwarded to this method. If the data source
does not provide minute level data, then there is no need to call the write method. It is
also acceptable to pass an empty iterator to write() to signal that there is no minutely
data.

Note

The data passed to write() may be a lazy iterator or generator to avoid loading all of the
minute data into memory at a single time. A given sid may also appear multiple times in
the data as long as the dates are strictly increasing.
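
A minimal sketch of such a call, where tickers and load_minute_frame() are hypothetical
(the latter returning an OHLCV dataframe indexed by minute timestamps):

def minute_frames():
    # lazily yield (sid, dataframe) pairs so all of the minute data
    # is never held in memory at once
    for sid, ticker in enumerate(tickers):
        yield sid, load_minute_frame(ticker)

minute_bar_writer.write(minute_frames(), show_progress=show_progress)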

daily_bar_writer

daily_bar_writer is an instance of BcolzDailyBarWriter. This writer is used to convert data
into zipline’s internal bcolz format to later be read by a BcolzDailyBarReader. If daily data is
provided, users should call write() with an iterable of (sid, dataframe) tuples.
The show_progress argument should also be forwarded to this method. If the data source
does not provide daily data, then there is no need to call the write method. It is also
acceptable to pass an empty iterable to write() to signal that there is no daily data. If no
daily data is provided but minute data is provided, a daily rollup will happen to service
daily history requests.

Note

Like the minute_bar_writer, the data passed to write() may be a lazy iterable or generator
to avoid loading all of the data into memory at once. Unlike the minute_bar_writer, a sid
may only appear once in the data iterable.

adjustment_writer

adjustment_writer is an instance of SQLiteAdjustmentWriter. This writer is used to store splits,
mergers, dividends, and stock dividends. The data should be provided as dataframes
and passed to write(). Each of these fields is optional, but the writer can accept as
much of the data as you have.
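
As a sketch, a single 2-for-1 split might be written like this (the sid and date are
illustrative; consult the write() docs for the exact column names and dtypes expected):

import pandas as pd

splits = pd.DataFrame({
    'sid': [0],
    'effective_date': [pd.Timestamp('2015-06-15')],
    'ratio': [0.5],  # the price is multiplied by this ratio on the effective date
})
adjustment_writer.write(splits=splits)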

calendar

calendar is an instance of zipline.utils.calendars.TradingCalendar. The calendar is provided
to help some bundles generate queries for the days needed.

start_session

start_session is a pandas.Timestamp object indicating the first day that the bundle should
load data for.

end_session

end_session is a pandas.Timestamp object indicating the last day that the bundle should load
data for.

cache
cache is an instance of dataframe_cache. This object is a mapping from strings to
dataframes. This object is provided in case an ingestion crashes part way through. The
idea is that the ingest function should check the cache for raw data; if it doesn’t exist in
the cache, it should acquire it and then store it in the cache. Then it can parse and write
the data. The cache will be cleared only after a successful load; this prevents the ingest
function from needing to redownload all the data if there is some bug in the parsing. If it
is very fast to get the data, for example if it is coming from another local file, then there
is no need to use this cache.
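
A minimal sketch of that check-then-store pattern, where download_raw_data() is a
hypothetical helper:

def fetch(ticker):
    try:
        return cache[ticker]
    except KeyError:
        # not cached yet: download and remember it so a retried
        # ingestion does not have to download it again
        data = cache[ticker] = download_raw_data(ticker)
        return data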

show_progress

show_progress is a boolean indicating that the user would like to receive feedback about
the ingest function’s progress fetching and writing the data. Some examples of where
to show progress would be how many files you have downloaded out of the total
needed, or how far into some data conversion the ingest function is. One tool that may
help with implementing show_progress for a loop is maybe_show_progress. This argument
should always be forwarded to minute_bar_writer.write and daily_bar_writer.write.
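
For instance, a download loop might be wrapped like this (tickers is hypothetical):

from zipline.utils.cli import maybe_show_progress

with maybe_show_progress(tickers,
                         show_progress,
                         label='Downloading pricing data: ') as it:
    for ticker in it:
        ...  # fetch and cache data for each ticker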

output_dir

output_dir is a string representing the file path where all the data will be
written. output_dir will be some subdirectory of $ZIPLINE_ROOT and will contain the time of
the start of the current ingestion. This can be used to directly move resources here if for
some reason your ingest function can produce its own outputs without the writers. For
example, the quantopian-quandl bundle uses this to directly untar the bundle into
the output_dir.
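
Putting the pieces together, here is a minimal sketch of a daily-only ingest function. It
assumes one CSV of OHLCV rows per ticker, in a directory named by a hypothetical
CSV_DIR environment variable, with dates that already align with the calendar’s
sessions; none of these names are part of zipline itself:

import os

import pandas as pd

def csv_ingest(environ,
               asset_db_writer,
               minute_bar_writer,
               daily_bar_writer,
               adjustment_writer,
               calendar,
               start_session,
               end_session,
               cache,
               show_progress,
               output_dir):
    csv_dir = environ['CSV_DIR']
    tickers = sorted(f[:-4] for f in os.listdir(csv_dir) if f.endswith('.csv'))

    # asset metadata is filled in as each ticker's date range is discovered
    metadata = pd.DataFrame(index=range(len(tickers)),
                            columns=['symbol', 'start_date', 'end_date'])

    def daily_bars():
        for sid, ticker in enumerate(tickers):
            df = pd.read_csv(os.path.join(csv_dir, ticker + '.csv'),
                             index_col='date', parse_dates=True)
            metadata.loc[sid] = ticker, df.index[0], df.index[-1]
            yield sid, df  # lazily hand each frame to the writer

    daily_bar_writer.write(daily_bars(), show_progress=show_progress)
    asset_db_writer.write(equities=metadata)
    adjustment_writer.write()  # no splits or dividends in this sketch

Registering this function in ~/.zipline/extension.py with
register('csv-bundle', csv_ingest) would then make it available
to $ zipline ingest -b csv-bundle.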

Release Notes

Release 1.0.2

Date: September 8, 2016

Enhancements
 Adds forward fill checkpoint tables for the blaze core loader. This allows the loader
to more efficiently forward fill the data by capping the lower date it must search for
when querying data. The checkpoints should have novel deltas applied (#1276).
 Updated Vagrantfile to include all dev requirements and use a newer image
(#1310).
 Allow correlations and regressions to be computed between two 2D factors by
doing computations asset-wise (#1307).
 Filters have been made window_safe by default. Now they can be passed in as
arguments to other Filters, Factors and Classifiers (#1338).
 Added an optional groupby parameter to rank(), top(), and bottom(). (#1349).
 Added new pipeline filters, All and Any, which take another filter and return True
if an asset produced a True for any/all days in the previous window_length days
(#1358).
 Added new pipeline filter AtLeastN, which takes another filter and an int N and
returns True if an asset produced a True on N or more days in the
previous window_length days (#1367).
 Use external library empyrical for risk calculations. Empyrical unifies risk metric
calculations between pyfolio and zipline. Empyrical adds custom annualization
options for returns of custom frequencies. (#855)
 Add Aroon factor. (#1258)
 Add fast stochastic oscillator factor. (#1255)
 Add a Dockerfile. (#1254)
 New trading calendar which supports sessions which span across midnights, e.g.
24 hour 6:01PM-6:00PM sessions for futures trading. zipline.utils.tradingcalendar is
now deprecated. (#1138) (#1312)
 Allow slicing a single column out of a Factor/Filter/Classifier. (#1267)
 Provide Ichimoku Cloud factor (#1263)
 Allow default parameters on Pipeline terms. (#1263)
 Provide rate of change percentage factor. (#1324)
 Provide linear weighted moving average factor. (#1325)
 Add NotNullFilter. (#1345)
 Allow capital changes to be defined by a target value. (#1337)
 Add TrueRange factor. (#1348)
 Add point in time lookups to assets.db. (#1361)
 Make can_trade aware of the asset’s exchange. (#1346)
 Add downsample method to all computable terms. (#1394)
 Add QuantopianUSFuturesCalendar. (#1414)
 Enable publishing of old assets.db versions. (#1430)
 Enable schedule_function for Futures trading calendar. (#1442)
 Disallow regressions of length 1. (#1466)

Experimental

 Add support for commingled Future and Equity history windows, and enable other
Future data access via data portal. (#1435) (#1432)

Bug Fixes

 Changes AverageDollarVolume built-in factor to treat missing close or volume values
as 0. Previously, NaNs were simply discarded before averaging, giving the
remaining values too much weight (#1309).
 Remove risk-free rate from sharpe ratio calculation. The ratio is now the average
of risk adjusted returns over volatility of adjusted returns. (#853)
 Sortino ratio will return a calculation instead of np.nan when required returns are
equal to zero. The ratio now returns the average of risk adjusted returns over
downside risk. Fixed mislabeled API by converting mar to downside_risk. (#747)
 Downside risk now returns the square root of the mean of downside difference
squares. (#747)
 Information ratio updated to return mean of risk adjusted returns over standard
deviation of risk adjusted returns. (#1322)
 Alpha and sharpe ratio are now annualized. (#1322)
 Fix units during reading and writing of daily bar first_trading_day attribute.
(#1245)
 Optional dispatch modules, when missing, no longer cause a NameError.
(#1246)
 Treat schedule_function argument as a time rule when a time rule, but no date rule
is supplied. (#1221)
 Protect against boundary conditions at the beginning and end of the trading day in
schedule function. (#1226)
 Apply adjustments to previous day when using history with a frequency of 1d.
(#1256)
 Fail fast on invalid pipeline columns, instead of attempting to access the
nonexistent column. (#1280)
 Fix AverageDollarVolume NaN handling. (#1309)

Performance

 Performance improvements to blaze core loader. (#1227)
 Allow concurrent blaze queries. (#1323)
 Prevent missing leading bcolz minute data from doing repeated unnecessary
lookups. (#1451)
 Cache future chain lookups. (#1455)

Maintenance and Refactorings

 Removed remaining mentions of add_history. (#1287)

Documentation

Testing

 Add test fixture which sources daily pricing data from minute pricing data fixtures.
(#1243)

Data Format Changes

 BcolzDailyBarReader and BcolzDailyBarWriter use trading calendar instance, instead
of trading days serialized to JSON. (#1330)
 Change format of assets.db to support point in time lookups. (#1361)
 Change BcolzMinuteBarReader and BcolzMinuteBarWriter to support varying tick
sizes. (#1428)

Release 1.0.1

Date: May 27, 2016


This is a minor bug-fix release from 1.0.0 and includes a small number of bug fixes and
documentation improvements.

Enhancements

 Added support for user-defined commission models. See
the zipline.finance.commission.CommissionModel class for more details on implementing a
commission model. (#1213)
 Added support for non-float columns to Blaze-backed Pipeline datasets (#1201).
 Added zipline.pipeline.slice.Slice, a new pipeline term designed to extract a
single column from another term. Slices can be created by indexing into a term,
keyed by asset. (#1267)

Bug Fixes

 Fixed a bug where Pipeline loaders were not properly initialized
by zipline.run_algorithm(). This also affected invocations of zipline run from the CLI.
 Fixed a bug that caused the %%zipline IPython cell magic to fail
(533233fae43c7ff74abfb0044f046978817cb4e4).
 Fixed a bug in the PerTrade commission model where commissions were
incorrectly applied to each partial-fill of an order rather than on the order itself,
resulting in algorithms being charged too much in commissions when placing large
orders.

PerTrade now correctly applies commissions on a per-order basis (#1213).


 Attribute accesses on CustomFactors defining multiple outputs will now correctly
return an output slice when the output is also the name of a Factor method (#1214).
 Replaced deprecated usage of pandas.io.data with pandas_datareader (#1218).
 Fixed an issue where .pyi stub files for zipline.api were accidentally excluded
from the PyPI source distribution. Conda users should be unaffected (#1230).

Documentation

 Added a new example, zipline.examples.momentum_pipeline, which exercises the
Pipeline API (#1230).

Release 1.0.0

Date: May 19, 2016

Highlights

Zipline 1.0 Rewrite (#1105)

We have rewritten a lot of Zipline and its basic concepts in order to improve runtime
performance. At the same time, we’ve introduced several new APIs.

At a high level, earlier versions of Zipline simulations pulled from a multiplexed stream
of data sources, which were merged via heapq. This stream was fed to the main
simulation loop, driving the clock forward. This strong dependency on reading all the
data made it difficult to optimize simulation performance because there was no
connection between the amount of data we fetched and the amount of data actually
used by the algorithm.

Now, we only fetch data when the algorithm needs it. A new class, DataPortal,
dispatches data requests to various data sources and returns the requested values.
This makes the runtime of a simulation scale much more closely with the complexity of
the algorithm, rather than with the number of assets provided by the data sources.

Instead of the data stream driving the clock, now simulations iterate through a
pre-calculated set of day or minute timestamps. The timestamps are emitted
by MinuteSimulationClock and DailySimulationClock, and consumed by the main loop
in transform().

We’ve retired the data[sid(N)] and history APIs, replacing them with several methods on
the BarData object: current(), history(), can_trade(), and is_stale(). Old APIs will continue to
work for now, but will issue deprecation warnings.

You can now pass in an adjustments source to the DataPortal, and we will apply
adjustments to the pricing data when looking backwards at data. Prices and volumes
used for execution and presented to the algorithm in data.current are the as-traded
values of the asset.

New Entry Points (#1173 and #1178)

In order to make it easier to use zipline we have updated the entry points for a backtest.
The three supported ways to run a backtest are now:

1. zipline.run_algorithm()

2. $ zipline run

3. %%zipline (IPython magic)

Data Bundles (#1173 and #1178)

1.0.0 introduces data bundles. Data bundles are groups of data that should be
preloaded and used to run backtests later. This allows users to not need to specify
which tickers they are interested in each time they run an algorithm. This also allows us
to cache the data between runs.

By default, the quantopian-quandl bundle will be used which pulls data from Quantopian’s
mirror of the quandl WIKI dataset. New bundles may be registered
with zipline.data.bundles.register() like:

@zipline.data.bundles.register('my-new-bundle')
def my_new_bundle_ingest(environ,
                         asset_db_writer,
                         minute_bar_writer,
                         daily_bar_writer,
                         adjustment_writer,
                         calendar,
                         cache,
                         show_progress):
    ...

This function should retrieve the data it needs and then use the writers that have been
passed to write that data to disk in a location that zipline can find later.

This data can be used in backtests by passing the name as the -b / --bundle argument
to $ zipline run or as the bundle argument to zipline.run_algorithm().

For more information, see the Data Bundles documentation.

String Support in Pipeline (#1174)


Added support for string data in Pipeline. zipline.pipeline.data.Column now
accepts object as a dtype, which signifies that loaders for that column should emit
windowed iterators over the experimental new LabelArray class.

Several new Classifier methods have also been added for constructing Filter instances
based on string operations. The new methods are:

 element_of()

 startswith()

 endswith()

 has_substring()

 matches()

element_of is defined for all classifiers. The remaining methods are only defined for
string-dtype classifiers.
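
As an illustrative sketch, a pipeline might screen on a string column; the dataset and
column here are hypothetical, and a loader would need to be registered for them:

from zipline.pipeline import Pipeline
from zipline.pipeline.data import Column, DataSet

class Fundamentals(DataSet):
    sector = Column(object)  # hypothetical string-dtype column

# keep only assets whose latest sector label is in the given set
pipe = Pipeline(
    screen=Fundamentals.sector.latest.element_of(['TECH', 'ENERGY']),
)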

Enhancements

 Made the data loading classes have more consistent interfaces. This includes the
equity bar writers, adjustment writer, and asset db writer. The new interface is that
the resource to be written to is passed at construction time and the data to write is
provided later to the write method as dataframes or some iterator of dataframes. This
model allows us to pass these writer objects around as a resource for other classes
and functions to consume (#1109 and #1149).
 Added masking to zipline.pipeline.CustomFactor. Custom factors can now be
passed a Filter upon instantiation. This tells the factor to only compute over stocks
for which the filter returns True, rather than always computing over the entire
universe of stocks. (#1095)
 Added zipline.utils.cache.ExpiringCache. A cache which wraps entries in
a zipline.utils.cache.CachedObject, which manages expiration of entries based on
the dt supplied to the get method. (#1130)
 Implemented zipline.pipeline.factors.RecarrayField, a new pipeline term designed
to be the output type of a CustomFactor with multiple outputs. (#1119)
 Added optional outputs parameter to zipline.pipeline.CustomFactor. Custom factors
are now capable of computing and returning multiple outputs, each of which are
themselves a Factor. (#1119)
 Added support for string-dtype pipeline columns. Loaders for these columns
should produce instances of zipline.lib.labelarray.LabelArray when
traversed. latest() on string columns produces a string-dtype
zipline.pipeline.Classifier. (#1174)
 Added several methods for converting Classifiers into Filters.

The new methods are: element_of(), startswith(), endswith(), has_substring(),
and matches().

element_of is defined for all classifiers. The remaining methods are only defined for
strings. (#1174)
 Added BollingerBands factor. This factor implements the Bollinger Bands technical
indicator: https://en.wikipedia.org/wiki/Bollinger_Bands (#1199).
 Fetcher has been moved from Quantopian internal code into Zipline (#1105).
 Added new built-in factors, RollingPearsonOfReturns, RollingSpearmanOfReturns,
and RollingLinearRegressionOfReturns (#1154).

Experimental Features
Warning

Experimental features are subject to change.

 Added a new zipline.lib.labelarray.LabelArray class for efficiently representing and
computing on string data with numpy. This class is conceptually similar
to pandas.Categorical, in that it represents string arrays as arrays of indices into a
(smaller) array of unique string values. (#1174)

Bug Fixes

None

Performance

None

Maintenance and Refactorings

None

Build

None

Documentation

 Updated documentation for the API methods (#1188).
 Updated release process to mention that docs should be built with python 3
(#1188).

Miscellaneous

 Zipline now provides a stub file for the zipline.api module. This module is
normally dynamically created so the stub file provides some static information for
utilities that can consume it, for example PyCharm (#1208).

Release 0.9.0

Date: March 29, 2016

Highlights

 Added classifiers and normalization methods to pipeline, along with new datasets
and factors.
 Added support for Windows with continuous integration on AppVeyor.

Enhancements

 Added new datasets CashBuybackAuthorizations and ShareBuybackAuthorizations for use
in the Pipeline API. These datasets provide an abstract interface for adding cash
and share buyback authorizations data, respectively, to a new algorithm.
pandas-based reference implementations for these datasets can be found
in zipline.pipeline.loaders.buyback_auth, and experimental blaze-based
implementations can be found in zipline.pipeline.loaders.blaze.buyback_auth. (#1022).
 Added new datasets DividendsByExDate, DividendsByPayDate,
and DividendsByAnnouncementDate for use in the Pipeline API. These datasets provide an
abstract interface for adding dividends data organized by ex date, pay date, and
announcement date, respectively, to a new algorithm. pandas-based reference
implementations for these datasets can be found in zipline.pipeline.loaders.dividends,
and experimental blaze-based implementations can be found
in zipline.pipeline.loaders.blaze.dividends. (#1093).
 Added new built-in factors, zipline.pipeline.factors.BusinessDaysSinceCashBuybackAuth
and zipline.pipeline.factors.BusinessDaysSinceShareBuybackAuth. These factors use the
new CashBuybackAuthorizations and ShareBuybackAuthorizations datasets, respectively.
(#1022).
 Added new built-in factors, zipline.pipeline.factors.BusinessDaysSinceDividendAnnouncement,
zipline.pipeline.factors.BusinessDaysUntilNextExDate,
and zipline.pipeline.factors.BusinessDaysSincePreviousExDate. These factors use the
new DividendsByAnnouncementDate and DividendsByExDate datasets, respectively. (#1093).
 Implemented zipline.pipeline.Classifier, a new core pipeline API term
representing grouping keys. Classifiers are primarily used by passing them as
the groupby parameter to factor normalization methods. (#1046)
 Added factor normalization
methods: zipline.pipeline.Factor.demean() and zipline.pipeline.Factor.zscore(). (#1046)
 Added zipline.pipeline.Factor.quantiles(), a method for computing a Classifier
from a Factor by partitioning into equally-sized buckets. Also added helpers for
common quantile sizes (zipline.pipeline.Factor.quartiles(),
zipline.pipeline.Factor.quintiles(), and zipline.pipeline.Factor.deciles()) (#1075).
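
A brief sketch of these normalization methods in a pipeline (the choice of factor is
illustrative):

from zipline.pipeline import Pipeline
from zipline.pipeline.factors import Returns

returns = Returns(window_length=30)
pipe = Pipeline(columns={
    'demeaned': returns.demean(),      # subtract the cross-sectional mean
    'zscored': returns.zscore(),       # zero mean, unit standard deviation
    'quartile': returns.quantiles(4),  # classifier with buckets 0-3
})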

Experimental Features
Warning

Experimental features are subject to change.

None

Bug Fixes
 Fixed a bug where merging two numerical expressions failed given too many
inputs. This caused running a pipeline to fail when combining more than ten factors
or filters. (#1072)

Performance

None

Maintenance and Refactorings

None

Build

 Added AppVeyor for continuous integration on Windows. Added conda build of
zipline and its dependencies to AppVeyor and Travis builds, which upload their
results to anaconda.org labeled with “ci”. (#981)

Documentation

None

Miscellaneous

 Adds ZiplineTestCase which provides hooks to consume test fixtures. Fixtures are
things like: WithAssetFinder which will make self.asset_finder available to your test with
some mock data (#1042).

Release 0.8.4

Date: February 24, 2016

Highlights

 Added a new EarningsCalendar dataset for use in the Pipeline API. (#905).
 AssetFinder speedups (#830 and #817).
 Improved support for non-float dtypes in Pipeline. Most notably, we now
support datetime64 and int64 dtypes for Factor, and BoundColumn.latest now returns a
proper Filter object when the column is of dtype bool.
 Zipline now supports numpy 1.10, pandas 0.17, and scipy 0.16 (#969).
 Batch transforms have been deprecated and will be removed in a future release.
Using history is recommended as an alternative.

Enhancements

 Adds a way for users to provide a context manager to use when executing the
scheduled functions (including handle_data). This context manager will be passed
the BarData object for the bar and will be used for the duration of all of the functions
scheduled to run. This can be passed to TradingAlgorithm by the keyword
argument create_event_context (#828).
 Added support for zipline.pipeline.factors.Factor instances
with datetime64[ns] dtypes. (#905)
 Added a new EarningsCalendar dataset for use in the Pipeline API. This dataset
provides an abstract interface for adding earnings announcement data to a new
algorithm. A pandas-based reference implementation for this dataset can be found
in zipline.pipeline.loaders.earnings, and an experimental blaze-based implementation
can be found in zipline.pipeline.loaders.blaze.earnings. (#905).
 Added new built-in factors, zipline.pipeline.factors.BusinessDaysUntilNextEarnings
and zipline.pipeline.factors.BusinessDaysSincePreviousEarnings. These factors use the
new EarningsCalendar dataset. (#905).
 Added isnan(), notnan() and isfinite() methods
to zipline.pipeline.factors.Factor (#861).
 Added zipline.pipeline.factors.Returns, a built-in factor which calculates the
percent change in close price over the given window_length. (#884).
 Added a new built-in factor: AverageDollarVolume. (#927).
 Added ExponentialWeightedMovingAverage and ExponentialWeightedMovingStdDev factors.
(#910).
 Allow DataSet classes to be subclassed where subclasses inherit all of the
columns from the parent. These columns will be new sentinels so you can register
a custom loader for them (#924).
 Added coerce() to coerce inputs from one type into another before passing them
to the function (#948).
 Added optionally() to wrap other preprocessor functions to explicitly
allow None (#947).
 Added ensure_timezone() to allow string arguments to get converted
into datetime.tzinfo objects. This also allows tzinfo objects to be passed directly
(#947).
 Added two optional arguments, data_query_time and data_query_tz,
to BlazeLoader and BlazeEarningsCalendarLoader. These arguments allow the user to
specify some cutoff time for data when loading from the resource. For example, if I
want to simulate executing my before_trading_start function at 8:45 US/Eastern then I
could pass datetime.time(8, 45) and 'US/Eastern' to the loader. This means that data
that is timestamped on or after 8:45 will not be seen on that day in the simulation.
The data will be made available on the next day (#947).
 BoundColumn.latest now returns a Filter for columns of dtype bool (#962).

 Added support for Factor instances with int64 dtype. Column now requires
a missing_value when dtype is integral. (#962)
 It is also now possible to specify custom missing_value values for float, datetime,
and bool Pipeline terms. (#962)
 Added auto-close support for equities. Any positions held in an equity that
reaches its auto_close_date will be liquidated for cash according to the equity’s last sale
price. Furthermore, any open orders for that equity will be canceled. Both futures
and equities are now auto-closed on the morning of their auto_close_date, immediately
prior to before_trading_start. (#982)

Experimental Features
Warning

Experimental features are subject to change.

 Added support for parameterized Factor subclasses. Factors may
specify params as a class-level attribute containing a tuple of parameter names.
These values are then accepted by the constructor and forwarded by name to the
factor’s compute function. This API is experimental, and may change in future
releases.

Bug Fixes

 Fixes an issue that would cause the daily/minutely method caching to change
the len of a SIDData object. This would cause us to think that the object was not empty
even when it was (#826).
 Fixes an error raised in calculating beta when benchmark data were sparse.
Instead numpy.nan is returned (#859).
 Fixed an issue pickling sentinel() objects (#872).
 Fixed spurious warnings on first download of treasury data (#922).
 Corrected the error messages for set_commission() and set_slippage() when used
outside of the initialize function. These errors called the functions override_* instead
of set_*. This also renamed the exception types raised
from OverrideSlippagePostInit and OverrideCommissionPostInit
to SetSlippagePostInit and SetCommissionPostInit (#923).
 Fixed an issue in the CLI that would cause assets to be added twice. This would
map the same symbol to two different sids (#942).
 Fixed an issue where the PerformancePeriod incorrectly reported the
total_positions_value when creating an Account (#950).
 Fixed issues around KeyErrors coming from history and BarData on 32-bit
python, where Assets did not compare properly with int64s (#959).
 Fixed a bug where boolean operators were not properly implemented
on Filter (#991).
 Installation of zipline no longer downgrades numpy to 1.9.2 silently and
unconditionally (#969).

Performance

 Speeds up lookup_symbol() by adding an extension, AssetFinderCachedEquities, that
loads equities into dictionaries and then directs lookup_symbol() to these dictionaries to
find matching equities (#830).
 Improved performance of lookup_symbol() by performing batched queries. (#817).

Maintenance and Refactorings

 Asset databases now contain version information to ensure compatibility with
current Zipline version (#815).
 Upgrade requests version to 2.9.1 (2ee40db)
 Upgrade logbook version to 0.12.5 (11465d9).
 Upgrade Cython version to 0.23.4 (5f49fa2).

Build

 Makes zipline install requirements more flexible (#825).
 Use versioneer to manage the project __version__ and setup.py version (#829).
 Fixed coveralls integration on travis build (#840).
 Fixed conda build, which now uses git source as its source and reads
requirements using setup.py, instead of copying them and letting them get out of
sync (#937).
 Require setuptools > 18.0 (#951).

Documentation

 Document the release process for developers (#835).
 Added reference docs for the Pipeline API. (#864).
 Added reference docs for Asset Metadata APIs. (#864).
 Added reference docs for Asset Metadata APIs. (#864).
 Generated documentation now includes links to source code for many classes
and functions. (#864).
 Added platform-specific documentation describing how to find binary
dependencies. (#883).

Miscellaneous

 Added a show_graph() method to render a Pipeline as an image (#836).
 Adds subtest() decorator for creating subtests
without nose_parameterized.expand() which bloats the test output (#833).
 Limits timer report in test output to 15 longest tests (#838).
 Treasury and benchmark downloads will now wait up to an hour to download
again if data returned from a remote source does not extend to the date expected.
(#841).
 Added a tool to downgrade the assets db to previous versions (#941).

Release 0.8.3

Date: November 6, 2015


Note

We advanced the version to 0.8.3 to fix a source distribution issue with pypi. There are
no code changes in this version.

Release 0.8.0

Date: November 6, 2015

Highlights

 New documentation system with a new website at zipline.io
 Major performance enhancements.
 Dynamic history.
 New user defined method: before_trading_start.
 New api function: schedule_function().
 New api function: get_environment().
 New api function: set_max_leverage().
 New api function: set_do_not_order_list().
 Pipeline API.
 Support for trading futures.

Enhancements

 Account object: Adds an account object to context to track information about the
trading account. Example:

context.account.settled_cash

Returns the settled cash value that is stored on the account object. This value is
updated accordingly as the algorithm is run (#396).
 HistoryContainer can now grow dynamically. Calls to history() will now be able to
increase the size or change the shape of the history container to be able to service
the call. add_history() now acts as a performance hint to pre-allocate sufficient space
in the container. This change is backwards compatible with history; all existing
algorithms should continue to work as intended (#412).
 Simple transforms ported from quantopian and use history. SIDData now has
methods for:

 stddev

 mavg

 vwap

 returns

These methods, except for returns, accept a number of days. If you are running with
minute data, then this will calculate the number of minutes in those days, accounting
for early closes and the current time, and apply the transform over the set of
minutes. returns takes no parameters and will return the daily returns of the given
asset. Example:
data[security].stddev(3)

(#429).
o New fields in Performance Period. Performance Period has new fields
accessible in return value of to_dict: gross leverage, net leverage, short exposure,
long exposure, shorts count, and longs count (#464).
o Allow order_percent() to work with various market values (by Jeremiah
Lowin).

Currently, order_percent() and order_target_percent() both operate as a percentage
of self.portfolio.portfolio_value. This PR lets them operate as percentages of other
important MVs. Also adds context.get_market_value(), which enables this functionality.
For example:

# this is how it works today (and this still works)
# put 50% of my portfolio in AAPL
order_percent('AAPL', 0.5)
# note that if this were a fully invested portfolio, it would become 150% levered.

# take half of my available cash and buy AAPL
order_percent('AAPL', 0.5, percent_of='cash')

# rebalance my short position, as a percentage of my current short book
order_target_percent('MSFT', 0.1, percent_of='shorts')

# rebalance within a custom group of stocks
tech_stocks = ('AAPL', 'MSFT', 'GOOGL')
tech_filter = lambda p: p.sid in tech_stocks
for stock in tech_stocks:
    order_target_percent(stock, 1/3, percent_of_fn=tech_filter)

(#477).
o Command line option for printing the algo to stdout (by Andrea D’Amore)
(#545).
o New user defined function before_trading_start. This function can be
overridden by the user to be called once before the market opens every day (#389).
o New api function schedule_function(). This function allows the user to
schedule a function to be called based on more complicated rules about the date
and time. For example, call the function 15 minutes before market close respecting
early closes (#411).
o New api function set_do_not_order_list(). This function accepts a list of
assets and adds a trading guard that prevents the algorithm from trading them. Adds
a point in time list of leveraged ETFs that people may want to mark as ‘do not
trade’ (#478).
o Adds a class for representing securities. order() and other order functions
now require an instance of Security instead of an int or string (#520).
o Generalize the Security class to Asset. This is in preparation of adding
support for other asset types (#535).
o New api function get_environment(). This function by default returns the
string 'zipline'. This is used so that algorithms can have different behavior on
Quantopian and local zipline (#384).
o Extends get_environment() to expose more of the environment to the
algorithm. The function now accepts an argument that is the field to return. By
default, this is 'platform' which returns the old value of 'zipline' but the following new
fields can be requested:

 'arena': Is this live trading or backtesting?

 'data_frequency': Is this minute mode or daily mode?

 'start': Simulation start date.

 'end': Simulation end date.

 'capital_base': The starting capital for the simulation.

 'platform': The platform that the algorithm is running on.

 '*': A dictionary containing all of these fields.

(#449).
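
A brief sketch of branching on the platform field inside an algorithm:

from zipline.api import get_environment

def initialize(context):
    # behave differently when running locally vs. on Quantopian
    if get_environment('platform') == 'zipline':
        context.is_local = True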
o New api function set_max_leverage(). This method adds a trading guard that
prevents your algorithm from over-leveraging itself (#552).

Experimental Features
Warning

Experimental features are subject to change.

 Adds new Pipeline API. The pipeline API is a high-level declarative API for
representing trailing window computations on large datasets (#630).
 Adds support for futures trading (#637).
 Adds Pipeline loader for blaze expressions. This allows users to pull data from
any format blaze understands and use it in the Pipeline API. (#775).

Bug Fixes

 Fix a bug where the reported returns could sharply dip for random periods of time
(#378).
 Fix a bug that prevented debuggers from resolving the algorithm file (#431).
 Properly forward arguments to user defined initialize function (#687).
 Fix a bug that would cause treasury data to be redownloaded every backtest
between midnight EST and the time when the treasury data was available (#793).
 Fix a bug that would cause the user defined analyze function to not be called if it
was passed as a keyword argument to TradingAlgorithm (#819).

Performance

 Major performance enhancements to history (by Dale Jung) (#488).

Maintenance and Refactorings

 Remove simple transform code. These are available as methods
of SIDData (#550).

Build

None

Documentation

 Switched to sphinx for the documentation (#816).

Release 0.7.0

Date: July 25, 2014

Highlights

 Command line interface to run algorithms directly.
 IPython Magic %%zipline that runs algorithm defined in an IPython notebook cell.
 API methods for building safeguards against runaway ordering and undesired
short positions.
 New history() function to get a moving DataFrame of past market data (replaces
BatchTransform).
 A new beginner tutorial.

Enhancements

 CLI: Adds a CLI and IPython magic for zipline. Example:

python run_algo.py -f dual_moving_avg.py --symbols AAPL --start 2011-1-1 --end 2012-1-1 -o dma.pickle

Grabs the data from yahoo finance, runs the file dual_moving_avg.py (and looks
for dual_moving_avg_analyze.py which, if found, will be executed after the algorithm has
been run), and outputs the perf DataFrame to dma.pickle (#325).
 IPython magic command (at the top of an IPython notebook cell). Example:

%%zipline --symbols AAPL --start 2011-1-1 --end 2012-1-1 -o perf

Does the same as above except, instead of executing the file, it looks for the algorithm
in the cell, and instead of outputting the perf df to a file, it creates a variable in the
namespace called perf (#325).
 Adds Trading Controls to the algorithm API.

The following functions are now available on TradingAlgorithm and for algo scripts:

set_max_order_size(self, sid=None, max_shares=None, max_notional=None) - Set a limit
on the absolute magnitude, in shares and/or total dollar value, of any single order placed
by this algorithm for a given sid. If sid is None, then the rule is applied to any order
placed by the algorithm. Example:

def initialize(context):
    # Algorithm will raise an exception if we attempt to place an
    # order which would cause us to hold more than 10 shares
    # or 1000 dollars worth of sid(24).
    set_max_order_size(sid(24), max_shares=10, max_notional=1000.0)

set_max_position_size(self, sid=None, max_shares=None, max_notional=None) - Set a limit
on the absolute magnitude, in either shares or dollar value, of any position held by the
algorithm for a given sid. If sid is None, then the rule is applied to any position held
by the algorithm. Example:

def initialize(context):
    # Algorithm will raise an exception if we attempt to hold more than
    # 10 shares or 1000 dollars worth of sid(24).
    set_max_position_size(sid(24), max_shares=10, max_notional=1000.0)

set_max_order_count(self, max_count) - Set a limit on the number of orders that can be
placed by the algorithm in a single trading day. Example:

def initialize(context):
    # Algorithm will raise an exception if more than 50 orders are placed in a day.
    set_max_order_count(50)

set_long_only(self) - Set a rule specifying that the algorithm may not hold short
positions. Example:

def initialize(context):
    # Algorithm will raise an exception if it attempts to place
    # an order that would cause it to hold a short position.
    set_long_only()

(#329).
 Adds an all_api_methods classmethod on TradingAlgorithm that returns a list of
all TradingAlgorithm API methods (#333).
 Expanded record() functionality for dynamic naming. The record() function can
now take positional args before the kwargs. All original usage and functionality is the
same, but now these extra usages will work:

name = 'Dynamically_Generated_String'
record(name, value, ...)
record(name, value1, 'name2', value2, name3=value3, name4=value4)

The requirements are simply that the positional args occur only before the kwargs
(#355).
 history() has been ported from Quantopian to Zipline and provides a moving
window of market data. history() replaces BatchTransform. It is faster, works for
minute level data and has a superior interface. To use it, call add_history() inside
of initialize() and then receive a pandas DataFrame by calling history() from
inside handle_data(). Check out the tutorial and an example. (#345 and #357).
 history() now supports 1m window lengths (#345).

Bug Fixes

 Fix alignment of trading days and open and closes in trading environment (#331).
 RollingPanel fix when adding/dropping new fields (#349).

Performance

None

Maintenance and Refactorings

 Removed undocumented and untested HDF5 and CSV data sources (#267).
 Refactor sim_params (#352).
 Refactoring of history (#340).

Build

 The following dependencies have been updated (zipline might work with other
versions too):
 -pytz==2013.9
 +pytz==2014.4
 +numpy==1.8.1
 -numpy==1.8.0
 +scipy==0.12.0
 +patsy==0.2.1
 +statsmodels==0.5.0
 -six==1.5.2
 +six==1.6.1
 -Cython==0.20
 +Cython==0.20.1
 -TA-Lib==0.4.8
 +--allow-external TA-Lib --allow-unverified TA-Lib TA-Lib==0.4.8
 -requests==2.2.0
 +requests==2.3.0
 -nose==1.3.0
 +nose==1.3.3
 -xlrd==0.9.2
 +xlrd==0.9.3
 -pep8==1.4.6
 +pep8==1.5.7
 -pyflakes==0.7.3
 -pip-tools==0.3.4
 +pyflakes==0.8.1
 -scipy==0.13.2
 -tornado==3.2
 -pyparsing==2.0.1
 -patsy==0.2.1
 -statsmodels==0.4.3
 +tornado==3.2.1
 +pyparsing==2.0.2
 -Markdown==2.3.1
 +Markdown==2.4.1

Contributors

The following people have contributed to this release, ordered by numbers of commit:

38 Scott Sanderson
29 Thomas Wiecki
26 Eddie Hebert
6 Delaney Granizo-Mackenzie
3 David Edwards
3 Richard Frank
2 Jonathan Kamens
1 Pankaj Garg
1 Tony Lambiris
1 fawce

Release 0.6.1

Date: April 23, 2014

Highlights

 Major fixes to risk calculations, see Bug Fixes section.
 Port of history() function, see Enhancements section.
 Start of support for Quantopian algorithm script-syntax, see Enhancements section.
 conda package manager support, see Build section.

Enhancements

 Always process new orders, i.e. on bars where handle_data isn’t called but there is
‘clock’ data, e.g. a consistent benchmark, process orders.
 Empty positions are now filtered from the portfolio container, to help prevent
algorithms from operating on positions that are not in the existing universe of stocks.
Formerly, iterating over positions would return positions for stocks which had zero
shares held; an explicit check in algorithm code for pos.amount != 0 was needed to
avoid using a non-existent position.
 Add trading calendar for BMF&Bovespa.
 Add beginning of algo script support.
 Starts on the path of parity with the script syntax in Quantopian’s IDE
on https://quantopian.com. Example:
from datetime import datetime
import pytz

from zipline import TradingAlgorithm
from zipline.utils.factory import load_from_yahoo
from zipline.api import order

def initialize(context):
    context.test = 10

def handle_date(context, data):
    order('AAPL', 10)
    print(context.test)

if __name__ == '__main__':
    import pylab as pl
    start = datetime(2008, 1, 1, 0, 0, 0, 0, pytz.utc)
    end = datetime(2010, 1, 1, 0, 0, 0, 0, pytz.utc)
    data = load_from_yahoo(
        stocks=['AAPL'],
        indexes={},
        start=start,
        end=end)
    data = data.dropna()
    algo = TradingAlgorithm(
        initialize=initialize,
        handle_data=handle_date)
    results = algo.run(data)
    results.portfolio_value.plot()
    pl.show()

 Add HDF5 and CSV sources.


 Limit handle_data to times with market data. To prevent cases where custom data
types had unaligned timestamps, only call handle_data when market data passes
through. Custom data that comes before market data will still update the data bar.
But the handling of that data will only be done when there is actionable market data.
 Extended commission PerShare method to allow a minimum cost per trade.
 Add symbol api function. A symbol() lookup feature was added to Quantopian. By
adding the same API function to zipline we can make copy&pasting of a Zipline algo
to Quantopian easier.
 Add simulated random trade source. Added a new data source that emits events
with certain user-specified frequency (minute or daily). This allows users to backtest
and debug an algorithm in minute mode to provide a cleaner path towards
Quantopian.
 Remove dependency on benchmark for trading day calendar. Instead of the
benchmarks’ index, the trading calendar is now used to populate the environment’s
trading days. Remove extra_date field, since unlike the benchmarks list, the trading
calendar can generate future dates, so dates for current day trading do not need to
be appended. Motivations:

 The source for the open and close/early close calendar and the trading
day calendar is now the same, which should help prevent potential issues due to
misalignment.
 Allows configurations where the benchmark is provided as a generator
based data source to not need to supply a second benchmark list just to populate
dates.
o Port history() API method from Quantopian. Opens the core of
the history() function that was previously only available on the Quantopian platform.

The history method is analogous to the batch_transform function/decorator, but with a
hopefully more precise specification of the frequency and period of the previous bar
data that is captured. Example usage:
from zipline.api import history, add_history

def initialize(context):
    add_history(bar_count=2, frequency='1d', field='price')

def handle_data(context, data):
    prices = history(bar_count=2, frequency='1d', field='price')
    context.last_prices = prices

N.B. this version of history lacks the backfilling capability that allows the return of a full
DataFrame on the first bar.

Bug Fixes

 Adjust benchmark events to match market hours (#241). Previously benchmark
events were emitted at 0:00 on the day the benchmark related to: in ‘minute’
emission mode this meant that the benchmarks were emitted before any intra-day
trades were processed.
 Ensure perf stats are generated for all days. When running with minutely
emissions the simulator would report to the user that it simulated ‘n - 1’ days (where
n is the number of days specified in the simulation params). Now the correct number
of trading days are reported as being simulated.
 Fix repr for cumulative risk metrics. The __repr__ for RiskMetricsCumulative was
referring to an older structure of the class, causing an exception when printed. Also,
now prints the last values in the metrics DataFrame.
 Prevent minute emission from crashing at end of available data. The next day
calculation was causing an error when a minute emission algorithm reached the end
of available data. Instead of a generic exception when available data is reached,
raise and catch a named exception so that the tradesimulation loop can skip over,
since the next market close is not needed at the end.
 Fix pandas indexing in trading calendar. This could alternatively be filed under
Performance. Index using loc instead of the inefficient index-ing of day, then time.
 Prevent crash in vwap transform due to non-existent member. The
WrongDataForTransform was referencing a self.fields member, which did not exist.
Add a self.fields member set to price and volume and use it to iterate over during the
check.
 Fix max drawdown calculation. The input into max drawdown was incorrect,
causing bad results, i.e. the compounded_log_returns were not values representative
of the algorithm’s total return at a given time, though calculate_max_drawdown was
treating the values as if they were. Instead, the algorithm_period_returns series is now
used, which does provide the total return.
 Fix cost basis calculation. Cost basis calculation now takes direction of txn into
account. Closing a long position or covering a short shouldn’t affect the cost basis.
 Fix floating point error in order(). Where order amounts that were near an integer
could accidentally be floored or ceilinged (depending on being positive or negative)
to the wrong integer. e.g. an amount stored internally as -27.99999 was converted to
-27 instead of -28.
 Update perf period state when positions are changed by splits.
Otherwise, self._position_amounts will be out of sync with position.amount, etc.
 Fix misalignment of downside series calc when using exact dates. An oddity that
was exposed while working on making the return series passed to the risk module
more exact, the series comparison between the returns and mean returns was
unbalanced, because the mean returns were not masked down to the downside data
points; however, in most, if not all cases this was papered over by the call
to .valid() which was removed in this change set.
 Check that self.logger exists before using it. self.logger is initialized as None and
there is no guarantee that users have set it, so check that it exists before trying to
pass messages to it.
 Prevent out of sync market closes in performance tracker. In situations where the
performance tracker has been reset or patched to handle state juggling with
warming up live data, the market_close member of the performance tracker could end
up out of sync with the current algo time as determined by the performance tracker.
The symptom was dividends never triggering, because the end of day checks would
not match the current time. Fix by having the tradesimulation loop be responsible, in
minute/minute mode, for advancing the market close and passing that value to the
performance tracker, instead of having the market close advanced by the
performance tracker as well.
 Fix numerous cumulative and period risk calculations. The calculations that are
expected to change are:

 cumulative.beta

 cumulative.alpha

 cumulative.information

 cumulative.sharpe

 period.sortino

How Risk Calculations Are Changing

Risk Fixes for Both Period and Cumulative

Downside Risk

Use sample instead of population for standard deviation.

Add a rounding factor, so that if the two values are close for a given dt, that they do
not count as a downside value, which would throw off the denominator of the
standard deviation of the downside diffs.

Standard Deviation Type

Across the board the standard deviation has been standardized to using a ‘sample’
calculation, whereas before cumulative risk was mostly using ‘population’.
Using ddof=1 with np.std calculates as if the values are a sample.

Cumulative Risk Fixes

Beta

Use the daily algorithm returns and benchmarks instead of annualized mean returns.

Volatility

Use sample instead of population with standard deviation.

The volatility is an input to other calculations so this change affects Sharpe and
Information ratio calculations.

Information Ratio

The benchmark returns input is changed from annualized benchmark returns to the
annualized mean returns.

Alpha

The benchmark returns input is changed from annualized benchmark returns to the
annualized mean returns.

Period Risk Fixes

Sortino

Now uses the downside risk of the daily return vs. the mean algorithm returns for the
minimum acceptable return instead of the treasury return.

The above required adding the calculation of the mean algorithm returns for period
risk.

Also, uses algorithm_period_returns and treasury_period_return as the cumulative
Sortino does, instead of using algorithm returns for both inputs into the Sortino
calculation.

Performance

 Removed alias_dt transform in favor of property on SIDData. Adding a copy of
the Event’s dt field as datetime via the alias_dt generator, so that the API was
forgiving and allowed both datetime and dt on a SIDData object, was creating
noticeable overhead, even on a no-op algorithm. Instead of incurring the cost of
copying the datetime value and assigning it to the Event object on every event that is
passed through the system, add a property to SIDData which acts as an
alias datetime to dt. Eventually support for data['foo'].datetime may be removed, and
could be considered deprecated.
 Remove the drop of ‘null return’ from cumulative returns. The check of existence
of the null return key, and the drop of said return on every single bar was adding
unneeded CPU time when an algorithm was run with minute emissions. Instead, add
the 0.0 return with an index of the trading day before the start date. The removal of
the null return was mainly in place so that the period calculation was not crashing on
a non-date index value; with the index as a date, the period return can also
approximate volatility (even though that volatility has high noise-to-signal
strength because it uses only two values as an input.)

Maintenance and Refactorings

 Allow sim_params to provide data frequency for the algorithm. In the case
that data_frequency of the algorithm is None, allow the sim_params to provide
the data_frequency.

Also, defer to the algorithm’s data frequency, if provided.


Build

 Added support for building and releasing via conda, for those who prefer building
with http://conda.pydata.org/ to compiling locally with pip. The following should install
Zipline on many systems:

conda install -c quantopian zipline

Contributors

The following people have contributed to this release, ordered by numbers of commit:

49 Eddie Hebert
28 Thomas Wiecki
11 Richard Frank
2 Jamie Kirkpatrick
2 Jeremiah Lowin
1 Colin Alexander
1 Michael Schatzow
1 Moises Trovo
1 Suminda Dharmasena
