Nov 14, 2018

Sure.

Why is the title of the article so disingenuous?

What design do people think they see in nature?

By what method can I, without referring to someone else's authority, decide that a particular part of nature is "designed"? Without an explicit criterion for "design", the whole thing is just arguing over opinions about what is designed and what isn't. That is, it would become just like Behe's "irreducible complexity": no definite criterion for "irreducible" and no measurement for "complexity".

While you're at it, point me to some PDFs of refereed papers written by Dembski. I'm a smart guy, I can read primary sources. The article doesn't point to any, and a quick duckduckgo for "dembski no free lunch" gets me web pages that critique Dembski's book (and not on information-theoretic grounds, either), and his book on Amazon, but nothing I can print and read and digest. No PDFs.

How does Dembski wave off actual examples of artificial evolution? Tierra (http://life.ou.edu/tierra/) seems to evolve little digital pseudo-life that does a specific task. Here's another example: http://www.genetic-programming.com/jkpdf/alife1990.pdf There are more "folksy" reports, too: https://www.allegro.cc/forums/thread/293256 . There's an entire field, genetic algorithms/genetic programming, that actually ends up, at least some of the time, evolving a solution to the problem. Adrian Thompson's FPGA experiment (https://www.researchgate.net/profile/Adrian_Thompson5/public...) is pretty famous. Here's a distillation of the above paper with a few interesting editorial comments: https://static.aminer.org/pdf/PDF/000/308/779/an_evolved_cir...

How does Dembski explain why every system that attempts to "evolve" something through successive generations that inherit with minor changes (https://arxiv.org/abs/1803.03453, which is a summary of many experiments) ends up doing something that's undeniably evolution?
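For anyone who hasn't seen one, the core loop these systems share is tiny. Here's a toy sketch in Python (my own illustration, not taken from any of the papers above) of inheritance with minor changes plus selection, maximizing a trivial made-up fitness function:

    import random

    GENOME_LEN = 64
    POP_SIZE = 100

    def fitness(genome):
        # Toy objective: the number of 1-bits, a stand-in for "solves the task".
        return sum(genome)

    def mutate(genome, rate=0.02):
        # Offspring inherit the parent's genome with occasional copying errors.
        return [1 - bit if random.random() < rate else bit for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(500):
        # Selection: sort by fitness; the fitter half survives and reproduces.
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            print("solved at generation", generation)
            break
        parents = population[:POP_SIZE // 2]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(POP_SIZE - len(parents))]

Nothing in there "knows" the answer ahead of time beyond a fitness score, yet a solution reliably falls out of repeated inherit-mutate-select cycles.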

Oct 07, 2018

I was forwarded this discussion by a friend who’s familiar with my work and I’ve really been enjoying the posts. I think about ethics and AI a lot, and couldn’t help but want to contribute a few thoughts here. So here goes, my first post on HN...

My main advice is: beware of AI’s surprising creativity, and proactively work to ensure it stays aligned with human interests.

There was a fascinating crowd-sourced paper published this year that shares anecdotes about unexpected adaptations encountered by researchers working in artificial life and evolutionary computation[1]. These are the sort of stories that can be funny in one light (à la “taught an AI to fish and it figured out how to drain the lake”), and doomsdayish in another (“it drained the lake”).

The authors concluded that there is “potential for perverse outcomes from optimizing reward functions that appear sensible.” That’s researcher for ¯\_(ツ)_/¯ ...as Tad Friend wrote in his excellent piece in the New Yorker on this topic[2].

In other words, humans can’t safeguard AI systems solely by defining what they believe to be sensible reward functions.

Reward functions, whether or not they seem sensible to humans, need to be mediated by additional regulatory mechanisms, such as hard-set non-goals that aren’t just penalty terms relative to a specific reward function. The best non-goals are unequivocally defined and measurable against intermediates produced during reward-function optimization. Done right, this sort of framework allows maladaptive processes to be detected reliably and effective interventions to be executed.
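To make that concrete, here's a rough sketch in Python (my own illustration, with made-up names like NON_GOALS, simulate, and resource fields, not any standard API or my company's actual framework) of what "mediated by additional regulatory mechanisms" can look like: the hard non-goals are predicates checked against intermediate state independently of the reward, and violating one triggers an intervention rather than just lowering the score.

    import random

    def reward(state):
        # The "sensible" objective the optimizer is free to exploit:
        # fish caught per step, in a toy fishing domain.
        return state["fish_caught"]

    # Hard-set non-goals: predicates over intermediate state, evaluated
    # independently of the reward. A violation means intervene, not penalize.
    NON_GOALS = {
        "lake_not_drained": lambda s: s["lake_volume"] > 0.5 * s["lake_volume_initial"],
        "no_bycatch": lambda s: s["bycatch"] == 0,
    }

    def propose_update(policy):
        # Stand-in for one optimization step (mutation, gradient step, etc.).
        return {k: v + random.gauss(0, 0.1) for k, v in policy.items()}

    def simulate(policy):
        # Stand-in environment producing intermediate state we can inspect.
        effort = abs(policy["effort"])
        return {
            "fish_caught": effort * 10,
            "lake_volume": 1000 - effort * 400,
            "lake_volume_initial": 1000,
            "bycatch": 0,
        }

    policy = {"effort": 0.1}
    for step in range(100):
        candidate = propose_update(policy)
        state = simulate(candidate)
        violated = [name for name, check in NON_GOALS.items() if not check(state)]
        if violated:
            # Maladaptive process detected: reject the update (or halt, alert, etc.)
            # rather than letting the optimizer trade the violation for reward.
            print("step", step, "rejected update, violated:", violated)
            continue
        if reward(state) > reward(simulate(policy)):
            policy = candidate

The point of keeping the non-goals outside the reward is that draining the lake can never become "worth it", no matter how much fish_caught it buys.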

Tad makes two other points that I think are worth noting in this discussion: #1. “It will be much easier and cheaper to build the first A.G.I. than to build the first safe A.G.I.” #2. “Lacking human intuition, A.G.I. can do us harm in the effort to oblige us.”

Given #1, when investing in AI companies, if you aim to be “on the more activist end of the spectrum” you’ll need to spend more money relative to market in order to support ethically responsible AI R&D programs, because they will necessarily be harder and more expensive than the irresponsible ones. Assuming you’re investing in AI companies for their products, and not as pure technology plays, this is simply a reality: the core functionalities needed for your portfolio company to sell product X will always be cheaper to develop than the core functionalities needed for your portfolio company to sell product X within a safe, secure framework.

There’s no point in your firm having codified principles unless it also has the fortitude to support its AI companies, financially and otherwise, through development processes that are harder and more expensive precisely because they’re more ethical. Many of these costs are absorbed in getting architectures and system designs right, which serves the product anyway, but big costs also come from running unique tests that would be superfluous if ethics weren’t a consideration.

Before thinking about having investees agree to your ethical principles around AI, it may be good for your firm to think about whether you’re willing to pay more for those principles to be lived up to. If two identical companies pitch you with identical AI products, but one plans to take an extra 6 months and $10M to safeguard their technology before launching, while the other intends to capture 8% of the market in that time, who will you fund?

Point #2 relates closely to “perverse outcomes.” In other words, AIs can harm humans unintentionally. Setting aside weaponized AI, which does harm intentionally and raises its own separate ethical dilemmas, everyday AI can do damage in a great many ways without intending to or even knowing it.

The IEEE Standards Association together with the MIT Media Lab recently launched a global Council on Extended Intelligence[3] which addresses many of these issues. Joichi Ito, Director of the MIT Media Lab and a person on the more activist end of the spectrum himself, stresses that: “Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.”

Disclaimer: I’m a co-founder of Arctop.

[1] https://arxiv.org/abs/1803.03453
[2] https://www.newyorker.com/magazine/2018/05/14/how-frightened...
[3] https://globalcxi.org/#vision

Mar 22, 2018

Here's a recent survey / observational science paper by some prominent "neuroevolution" / A-Life researchers: https://arxiv.org/abs/1803.03453. I found it refreshing because it's rare for science papers to talk about the debugging journeys and experimental process underlying the research.