Striving Towards the Singularity


Taken from medium.com/@jasikpark

For the past year, I have delved into learning all I can about machine learning and artificial intelligence…

All of this may make no sense, but I’m going to write it down anyway…

The neural network should have a short-term memory that is compressed by one neural network and decompressed by another. When “sleeping” (training), the stored data is successively decompressed to serve as training data. This could be achieved with a recurrent neural network, similar to Google’s Full Resolution Image Compression with Recurrent Neural Networks.
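Here’s a rough sketch of what I mean, in PyTorch. All the names and sizes (MemoryCodec, code_size, and so on) are made up for illustration, and I’m using plain GRUs rather than the architecture from the Google paper:

```python
# A minimal sketch of a compressed short-term memory, assuming
# fixed-length experience vectors. An encoder GRU squeezes a window
# of recent experience into a small code; during "sleep" a decoder
# GRU unrolls the code back into a sequence used as replay data.
import torch
import torch.nn as nn

class MemoryCodec(nn.Module):
    def __init__(self, obs_size=32, code_size=8):
        super().__init__()
        self.encoder = nn.GRU(obs_size, code_size, batch_first=True)
        self.decoder = nn.GRU(code_size, obs_size, batch_first=True)

    def compress(self, experience):           # (batch, steps, obs_size)
        _, code = self.encoder(experience)    # final hidden state is the code
        return code.squeeze(0)                # (batch, code_size)

    def decompress(self, code, steps):
        # Feed the code at every step so the decoder can unroll it.
        inputs = code.unsqueeze(1).repeat(1, steps, 1)
        recon, _ = self.decoder(inputs)
        return recon                          # (batch, steps, obs_size)

codec = MemoryCodec()
experience = torch.randn(4, 10, 32)           # 4 episodes, 10 steps each
code = codec.compress(experience)
replay = codec.decompress(code, steps=10)     # "sleeping": regenerate data

# Train the codec itself by reconstruction, so the code stays faithful.
loss = nn.functional.mse_loss(replay, experience)
loss.backward()
```

The point of the reconstruction loss is that the memory only stays useful as replay data if the decompressed sequences stay close to the original experience.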

The neural network should be a recurrent neural network that generates successively lower-order neural networks until one is trained on the actual training data, similar to Neural Architecture Search with Reinforcement Learning. The networks would be trained using the method in Decoupled Neural Interfaces using Synthetic Gradients, in which networks are proactively trained whenever there are extra processing cycles.
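A toy version of the synthetic-gradients half of that, again with made-up names and sizes, and only loosely following the Decoupled Neural Interfaces paper: a small predictor guesses the gradient at a layer boundary so the lower layer can update immediately (the “extra processing cycles” part), and the predictor is corrected whenever the true gradient is actually computed.

```python
# Sketch of decoupled training with synthetic gradients.
import torch
import torch.nn as nn

layer1 = nn.Linear(16, 32)
layer2 = nn.Linear(32, 1)
sg_model = nn.Linear(32, 32)      # predicts dLoss/dh from h itself

opt1 = torch.optim.SGD(layer1.parameters(), lr=0.01)
opt2 = torch.optim.SGD(layer2.parameters(), lr=0.01)
opt_sg = torch.optim.SGD(sg_model.parameters(), lr=0.01)

x, y = torch.randn(8, 16), torch.randn(8, 1)

# 1) Forward through layer1; update it with the *predicted* gradient,
#    without waiting for layer2 or the loss.
h = torch.relu(layer1(x))
synthetic_grad = sg_model(h.detach())
opt1.zero_grad()
h.backward(synthetic_grad.detach())
opt1.step()

# 2) Later, the true gradient arrives; train layer2 and the predictor.
h2 = torch.relu(layer1(x)).detach().requires_grad_()
loss = nn.functional.mse_loss(layer2(h2), y)
opt2.zero_grad()
loss.backward()
opt2.step()

sg_loss = nn.functional.mse_loss(sg_model(h2.detach()), h2.grad.detach())
opt_sg.zero_grad()
sg_loss.backward()
opt_sg.step()
```

The appeal is that step 1 and step 2 can run on different schedules, so a network can keep learning from predicted gradients whenever it has spare compute.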

Other papers worth reading:

- Recursive Decomposition for Nonconvex Optimization
- Towards Deep Symbolic Reinforcement Learning