“Success in creating A.I. would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” - Stephen Hawking
Films and novels warning us of the potential harms of artificial intelligence are certainly not lacking, and some leading technologists and scientists share the same concern. Yet AI will no doubt bring great advances to humanity.
Is it responsible for us to create artificial intelligence? What might be the philosophical implications? Will this be the beginning of a much-feared dystopia? What will the relationship between humans and intelligent machines be?