Howard Morgan Co-Founder of Renaissance Technologies
Olaf Carlson-Wee Founder of Polychain Capital, formerly Coinbase
Norman Packard Founder of Prediction Company
Geoff Bradway VP of Engineering at Numerai, formerly Google DeepMind
Joey Krug Founder of Augur, Thiel Fellow
Yunus Saatchi Permutation Ventures, formerly Vicarious
Peter Diamandis Founder of X-Prize, Co-Founder of Singularity University
Richard Craib Founder of Numerai
Review of last Quant Club
Revisiting leaf weights and playing with shrinkage
Top/bottom performance after applying shrinkage
Using number of points
Feeding tree weights to a neural network
Fitting a linear model for each era
What is tricky about the Numerai problem: functional relationships can change over time
Learn a nonlinear mapping for each era
Collect coefficients and perform an embedding
Finding a way to extrapolate from embedding space using Taylor series
Can you invert these embeddings back to the weights?
Build your own inversion using kernel ridge regression
Looking at how correlated the inversions are to the last coefficient, sanity checks
Q&A: Once you invert and try to go back, where would its location be? Checking using history
Q&A: Where is the notebook available?
Q&A: Does training tree-based models per era match the example?
Q&A: Any interesting insight from the magnitude of the move between eras on the embedding?
Not limited to a 3D embedding, other visual options
Other kernels
Q&A: Discussion of various approaches, such as taking the live data and altering it, and comparing two versions: straightforward vs. live-data-based models
Q&A: Discussion of the goals of synthetic data: is it possible to fill out the data with synthetic data?
Q&A: Discussion of taking the live era and comparing it to historical eras
Q&A: Discussion of market predictions reacting against their own history
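The Quant Club workflow outlined above (fit a linear model per era, collect the coefficients, embed them in a low-dimensional space, then build an inverse map from embedding space back to coefficient space with kernel ridge regression) can be sketched on toy data. Everything here is a hypothetical stand-in, not Numerai's actual data or the notebook's exact code: the drifting coefficients, dimensions, and kernel settings are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Toy data: a few eras whose linear feature/target relationship drifts over time
# (a stand-in for the changing functional relationships discussed above).
n_eras, n_rows, n_feat = 12, 200, 5
coefs = []
for e in range(n_eras):
    true_w = np.sin(np.linspace(0.0, 1.0, n_feat) + 0.3 * e)  # drifting coefficients
    X = rng.standard_normal((n_rows, n_feat))
    y = X @ true_w + 0.1 * rng.standard_normal(n_rows)
    coefs.append(Ridge(alpha=1.0).fit(X, y).coef_)  # one linear model per era
coefs = np.asarray(coefs)                           # shape (n_eras, n_feat)

# Embed the per-era coefficient vectors in a low-dimensional space.
emb = PCA(n_components=3).fit_transform(coefs)

# Learn an approximate inversion embedding -> coefficients with kernel ridge regression.
inv = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(emb, coefs)
recon = inv.predict(emb)

# Sanity check: the reconstruction should correlate highly with the last era's coefficients.
r = np.corrcoef(recon[-1], coefs[-1])[0, 1]
```

Extrapolating a new point in embedding space and pushing it through the inverse map is one way to guess the next era's coefficients, which is the idea behind the Taylor-series extrapolation mentioned above.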
The fine structure that you say can't be captured: does it just go away if you have many more parameters in the neural network?
Do you think feature selection is a very important thing to do well?
What is your best way to do feature selection? What do you think is sort of the state of the art of feature selection on Numerai?
In this age of TC, the question is what is good performance?
Are all features sort of inherently relative anyway?
I know people now have access to past meta model scores; have people had luck using them to pursue TC-based models, and been able to make more sense of TC now that those scores are available?
What would be a perfect TC?
Is there a solid suggestion there? Say I build a model today: what can I do to see if it's going to get TC, short of submitting it and waiting? If I look at those meta model scores, look at the residuals, and the leftover corr is good, would I then expect to get TC?
Do you think that if you're not getting good corr, or even getting negative corr, you're just not going to get TC, or whatever TC you are getting is spurious and random? Or is good corr correlated with good TC just a subset of the good-TC space?
Well, I mean, Nomi: moving from a flat target to the Gaussian Nomi target did that to some extent, right? So now you're talking about doing that on our side as well?
Do you think that's still a problem, maybe? Just high dependence: if this feature, even if it's nonlinear, just isn't behaving this week, then there goes your model, in terms of robustness?
Aliens
New feature set, trying to take advantage of more time series information
Reasons these features haven’t been released yet, what we’ve learned from tests
Main problem to solve: what pure correlation against our main target is missing
New targets
Q&A: Is it correct that new targets are neutral to new factors but TC is still calculated with old risk factors? If so, how could current TC benefit from training on new targets?
Discussion of previous iterations: originality score, MMC, etc.
Questions of TC and staking in discord
What is the best way to predict TC?
Discussion on current research: new criteria for splitting trees
How have people used the stake-weighted meta model?
Wigglemuse input: experience making a set of models using historics
Discussion on user models: which have done better during the burn period?
Discussion on user models: how well can you reconstruct the meta model?
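For the stake-weighted meta model and meta-model-reconstruction discussion above, a minimal sketch of what a stake-weighted combination could look like. The function name and inputs are hypothetical, and Numerai's actual meta model construction involves more than a plain weighted average; this only illustrates the weighting idea.

```python
import numpy as np

def stake_weighted_average(predictions, stakes):
    """Combine per-model prediction vectors into a stake-weighted average.

    predictions: array-like of shape (n_models, n_rows)
    stakes:      array-like of shape (n_models,)
    """
    stakes = np.asarray(stakes, dtype=float)
    weights = stakes / stakes.sum()  # normalize stakes so the weights sum to 1
    return np.average(np.asarray(predictions, dtype=float), axis=0, weights=weights)

# Example: two models on two rows, the second model staked three times as heavily.
combined = stake_weighted_average([[0.0, 1.0], [1.0, 0.0]], [1.0, 3.0])
```

Reconstructing the meta model from known user models would then amount to estimating the weights, for instance by regressing the published meta model scores on the individual models' predictions.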
Q&A: Any word on LLMs being used to generate features?
Q&A: Wouldn't you need twenty years of time-stamped news; is that feasible?
Q&A: How much value added comes from LLMs over basic statistics?
Chat response: The "ChatGPT for stocks" paper says that vendor-provided sentiment analysis underperforms compared to ChatGPT.