Mar 27, 2017

I think this claim would be more credible if any other characteristics of a religion were present. Namely: doctrine, rituals, totems, prayer, and above all some credence given to a higher power or state, such as heaven or nirvana, that will never be seen by humans.

I'd be interested to see some of those, as I haven't yet, and I've been in the community for a while.

What I will grant is that there are those who speak in the same structure as religious people. "In the future there will be no poverty because machines will make everything we need for free" sounds a little too close to "In heaven, you can eat all you want and never get full!"

However similar they sound, though, they have radically different theoretical roots.

In fact, there is plenty of hard evidence of progress on every front you mentioned:

AGI: https://arxiv.org/abs/1701.08734

Immortality: http://www.sciencealert.com/scientists-have-successfully-rev...

Mind uploading: Not sure about Whole Brain Emulation progress right off the top of my head.

Genetic enhancement: http://www.sciencealert.com/scientists-reverse-sickle-cell-d...

Are they solved? Not by a long shot. Do we know when they will be? Of course not. There is progress, though.

As far as I know, nobody is making hard progress on when Jesus is coming back.

Mar 20, 2017

Maybe of interest:

https://arxiv.org/abs/1701.08734

https://arxiv.org/abs/1703.00837

Feb 18, 2017

PathNet paper: https://arxiv.org/abs/1701.08734

For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning: fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).
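
Since the abstract is dense, here is a minimal toy sketch (in Python) of the tournament-selection loop it describes, just to give intuition for how pathway genotypes get selected and mutated. The fitness function is a stub (in the paper it is task performance after training only the modules on the pathway for T steps), and every name below is my own illustration, not the authors' code:

    import random

    L, M, N = 3, 10, 3  # layers, modules per layer, active modules per path

    def random_pathway():
        # Genotype: for each layer, the indices of the N modules the path uses.
        return [random.sample(range(M), N) for _ in range(L)]

    def mutate(pathway, prob=1.0 / (L * N)):
        # Each gene independently drifts to a nearby module with small probability.
        child = [list(layer) for layer in pathway]
        for layer in child:
            for i in range(N):
                if random.random() < prob:
                    layer[i] = (layer[i] + random.randint(-2, 2)) % M
        return child

    def evaluate(pathway):
        # Stub fitness; replace with training + evaluating the pathway's modules.
        return random.random()

    population = [random_pathway() for _ in range(64)]

    for _ in range(1000):
        # Binary tournament: the winner's genotype, mutated, overwrites the loser's.
        a, b = random.sample(range(len(population)), 2)
        if evaluate(population[a]) >= evaluate(population[b]):
            winner, loser = a, b
        else:
            winner, loser = b, a
        population[loser] = mutate(population[winner])

For transfer, the idea is that after task A converges you freeze the parameters on the best path and rerun this loop for task B, so new paths can re-use the frozen modules where they help.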