Description |
1 online resource (358 p.) |
Series |
Chapman and Hall/CRC Financial Mathematics Series |
Contents |
Cover -- Half Title -- Series Page -- Title Page -- Copyright Page -- Dedication -- Contents -- Preface -- I. Introduction -- 1. Notations and data -- 1.1. Notations -- 1.2. Dataset -- 2. Introduction -- 2.1. Context -- 2.2. Portfolio construction: the workflow -- 2.3. Machine learning is no magic wand -- 3. Factor investing and asset pricing anomalies -- 3.1. Introduction -- 3.2. Detecting anomalies -- 3.2.1. Challenges -- 3.2.2. Simple portfolio sorts -- 3.2.3. Factors -- 3.2.4. Fama-MacBeth regressions -- 3.2.5. Factor competition -- 3.2.6. Advanced techniques |
|
3.3. Factors or characteristics? -- 3.4. Hot topics: momentum, timing, and ESG -- 3.4.1. Factor momentum -- 3.4.2. Factor timing -- 3.4.3. The green factors -- 3.5. The links with machine learning -- 3.5.1. Short list of recent references -- 3.5.2. Explicit connections with asset pricing models -- 3.6. Coding exercises -- 4. Data preprocessing -- 4.1. Know your data -- 4.2. Missing data -- 4.3. Outlier detection -- 4.4. Feature engineering -- 4.4.1. Feature selection -- 4.4.2. Scaling the predictors -- 4.5. Labelling -- 4.5.1. Simple labels -- 4.5.2. Categorical labels |
|
4.5.3. The triple barrier method -- 4.5.4. Filtering the sample -- 4.5.5. Return horizons -- 4.6. Handling persistence -- 4.7. Extensions -- 4.7.1. Transforming features -- 4.7.2. Macroeconomic variables -- 4.7.3. Active learning -- 4.8. Additional code and results -- 4.8.1. Impact of rescaling: graphical representation -- 4.8.2. Impact of rescaling: toy example -- 4.9. Coding exercises -- II. Common supervised algorithms -- 5. Penalized regressions and sparse hedging for minimum variance portfolios -- 5.1. Penalized regressions -- 5.1.1. Simple regressions -- 5.1.2. Forms of penalizations |
|
5.1.3. Illustrations -- 5.2. Sparse hedging for minimum variance portfolios -- 5.2.1. Presentation and derivations -- 5.2.2. Example -- 5.3. Predictive regressions -- 5.3.1. Literature review and principle -- 5.3.2. Code and results -- 5.4. Coding exercise -- 6. Tree-based methods -- 6.1. Simple trees -- 6.1.1. Principle -- 6.1.2. Further details on classification -- 6.1.3. Pruning criteria -- 6.1.4. Code and interpretation -- 6.2. Random forests -- 6.2.1. Principle -- 6.2.2. Code and results -- 6.3. Boosted trees: Adaboost -- 6.3.1. Methodology -- 6.3.2. Illustration |
|
6.4. Boosted trees: extreme gradient boosting -- 6.4.1. Managing loss -- 6.4.2. Penalization -- 6.4.3. Aggregation -- 6.4.4. Tree structure -- 6.4.5. Extensions -- 6.4.6. Code and results -- 6.4.7. Instance weighting -- 6.5. Discussion -- 6.6. Coding exercises -- 7. Neural networks -- 7.1. The original perceptron -- 7.2. Multilayer perceptron -- 7.2.1. Introduction and notations -- 7.2.2. Universal approximation -- 7.2.3. Learning via back-propagation -- 7.2.4. Further details on classification -- 7.3. How deep we should go and other practical issues -- 7.3.1. Architectural choices -- 7.3.2. Frequency of weight updates and learning duration |
Summary |
Machine learning (ML) is progressively reshaping the fields of quantitative finance and algorithmic trading. ML tools are increasingly adopted by hedge funds and asset managers, notably for alpha signal generation and stock selection |
Notes |
Description based upon print version of record |
Form |
Electronic book
|
Author |
Coqueret, Guillaume |
|
Guida, Tony
|
ISBN |
9781000912807 |
|
1000912809 |
|