Four short links
- We Are Not in a Simulation (Cosmos Magazine) — Ringel and Kovrizhi showed that attempts to use quantum Monte Carlo to model systems exhibiting anomalies, such as the quantum Hall effect, will always become unworkable: the complexity of the simulation grows exponentially with the number of particles being simulated. If the complexity grew linearly, doubling the number of particles would mean doubling the computing power required. If, however, it grows exponentially—the amount of computing power doubling every time a single particle is added—then the task quickly becomes impossible. Whew, I can finally sleep at night. (via Slashdot)
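A quick back-of-the-envelope sketch (not from the article) makes the linear vs. exponential contrast concrete; the cost functions and unit constant here are illustrative assumptions, not the paper's actual complexity bounds:

```python
def linear_cost(n_particles, unit=1.0):
    # Linear scaling: doubling the particle count doubles the cost.
    return unit * n_particles

def exponential_cost(n_particles, unit=1.0):
    # Exponential scaling: each added particle doubles the cost.
    return unit * 2 ** n_particles

# Going from 10 to 40 particles: 4x the compute under linear scaling,
# but 2**30 (over a billion) times the compute under exponential scaling.
for n in (10, 20, 40):
    print(f"{n} particles: linear={linear_cost(n):g}, exponential={exponential_cost(n):g}")
```

Even generous hardware improvements can't keep pace with the exponential curve, which is the crux of the "unworkable" claim.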
- TFX: A TensorFlow-based Production-Scale Machine Learning Platform — best description is from The Morning Paper. The new baseline: so far, you’ve embraced automated testing, continuous integration, continuous delivery, perhaps continuous deployment, and you have the sophistication to roll out new changes gradually, monitor behaviour, and stop or roll back when a problem is detected. On top of this, you’ve put in place a sophisticated metrics system and a continuous experimentation platform. Due to the increasing complexity of systems, you might also need to extend this to a general-purpose black-box optimization platform. But you’re still not done! All those machine learning models you’ve been optimizing need to be trained, validated, and served somehow. You need a machine learning platform. That’s the topic of today’s paper choice, which describes the machine learning platform inside Google, TFX.
- redash — GPLv3-licensed dashboard; connects to Redshift, Elasticsearch, BigQuery, MongoDB, MySQL, and PostgreSQL.
- Wikipedia Graph Mining: Dynamic Structure of Collective Memory — they use the changing popularity of pages to identify significant events, even separating predictable events like tournaments from unpredictable ones like tragedies.