Advice on transitioning to an AI career

By Prosyscom
March 4, 2018


…whether the path I am considering is feasible.

Any path is feasible if you are determined – I don’t say this flippantly, I mean it. The reason we feel otherwise is that the exact course of our chosen path(s) rarely coincides with our preconceived notions – we thought the path would zig when it actually zags, and vice versa.

The kind of questions I am interested in center around the mathematical and algorithmic nature of intelligent cognition – is thinking model-based, i.e. constructed from a recursive update of mental models based on discrepancy between predictions and observed reality, or is thinking model-free?

It depends on the level of abstraction. I recommend you read Eric Beinhocker’s book The Origin of Wealth – in particular, Beinhocker’s use of the term “schema reader” is precisely analogous to the use of the term “model” in AI. The difference is that schema readers are a superset of models – in fact, the schema reader is so general that it includes you and me (which is the point of using that concept).

The AI community, however, tends to be machine oriented. So, the emphasis is on model-building.
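To make the model-based/model-free distinction concrete, here is a minimal sketch (a toy example of my own, not something from the thread above): a model-free learner updates action values directly from experience, while a model-based learner fits a predictive model of the world and then plans with it. In this deterministic toy world a single observation fixes the model; in general, the model would be corrected incrementally from the discrepancy between prediction and observation.

```python
import random

# A tiny deterministic chain world: states 0..3, actions 'L'/'R',
# reward 1 for being in state 3. (Purely illustrative toy environment.)
def step(s, a):
    s2 = max(0, min(3, s + (1 if a == 'R' else -1)))
    return s2, (1.0 if s2 == 3 else 0.0)

ACTIONS = ['L', 'R']
GAMMA, ALPHA = 0.9, 0.5

# Model-free: Q-learning updates action values directly from experience,
# never building an explicit model of the world.
Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}
for _ in range(2000):
    s = random.randrange(4)
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    best_next = max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# Model-based: learn a model of transitions and rewards from observation,
# then plan by value iteration over the learned model.
T, R = {}, {}
for s in range(4):
    for a in ACTIONS:
        T[(s, a)], R[(s, a)] = step(s, a)  # deterministic world: one look fixes the model

V = [0.0] * 4
for _ in range(100):
    V = [max(R[(s, a)] + GAMMA * V[T[(s, a)]] for a in ACTIONS) for s in range(4)]

# Both routes agree on behaviour: from state 2, moving right is best.
best = max(ACTIONS, key=lambda a: Q[(2, a)])
print(best, round(V[2], 2))
```

The interesting part is that the two agents converge on the same behaviour by very different routes – one by caching values, one by simulating its model – which is exactly why the question is hard to settle by looking at behaviour alone.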

Is thinking/decision-making fruitfully understood as optimising some functions, and if so, what kinds of functions?

IMO, there are two big camps in AI that divide on this point. The symbolic/algorithmic camp says “Yes” (John McCarthy would fall here). This is where you get machine learning, deep learning, and so on. The biological/natural camp says “Kind of, but not really” (Marvin Minsky would fall here). This is where the robotics people and the VR people are currently working. There is, of course, lots of crossover between the two ways of thinking, but they are very distinct in their overall approaches.
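As a toy illustration of the “Yes” view (my own made-up example, not from the thread): decision-making cast as minimising a loss function by gradient descent, the workhorse move behind most of machine learning.

```python
# Toy quadratic loss of my own choosing; the "decision" is a single number x.
def loss(x):
    return (x - 3.0) ** 2       # minimised at x = 3

def grad(x):
    return 2.0 * (x - 3.0)      # derivative of the loss

x = 0.0                          # initial guess
lr = 0.1                         # learning rate (step size)
for _ in range(100):
    x -= lr * grad(x)           # step downhill along the gradient

# x is now very close to the minimiser, 3.0
print(round(x, 4), round(loss(x), 8))
```

The “Kind of, but not really” camp’s objection, roughly, is that real organisms rarely get a clean loss function handed to them in the first place.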

How is it that concept acquisition is possible, and how do agents learn from small or no examples, or from meagre verbal instructions? How do you get semantics out of the unstructured lists of numbers that form the representational bases for connectionist networks?

Well, you’ve basically listed the holy-grail problems in the field of AI. Solve them, and you will be immortal, remembered for the rest of history along with Pythagoras or Archimedes. Faced with the immensity of the problem of replicating the human brain from scratch (it turns out that, during billions of years of biological evolution, Nature was actually doing something), AI has generally turned to more modest goals – image recognition, pattern classification, routing and planning, speech-to-text, and so on.

I’m (a) averse to experimental work on people [building AI is fun for me tho]

Every time you interact with a customer, that’s an experiment. If you’re Mark Zuckerberg and you’re building an AI to make friend recommendations, you’re experimenting on people. In the field of AI, this is generally what we mean by “experimenting on people”, not implanting electrodes into their brains. Although some of the fMRI stuff borders on that.

and (b) more generally interested in the broader question of what intelligence could be rather than how it is realised in humans specifically. But would you guys recommend I just go into cogsci/psych/neuro anyway?

If you want to go into AI, you need to at least learn the vocabulary. Start a Khan Academy intro course (free). Start a Coursera intro course ($50/mo., many course lectures free online). There is no avoiding basic math concepts/vocabulary, if for no other reason than to be able to recognize the dividing line between problems that can be solved by mathematical methods and problems that can’t (your opening question). So I’d recommend brushing up on your math either way, up to at least Calc II (integration). Khan Academy. Coursera. YouTube. The Internet. Remedial classes at your uni.

Also, would doing the Imperial conversion course help me become a useful contributor to the field? The first half of the course has your standard C++ programming, computer architecture and logic programming fare, with the possibility of taking options in the second half of the program in probabilistic/statistical computing, machine learning, algorithms, and AI. I can’t do the AI-focussed masters due to lack of background.

I’ve picked up the implication from your post that you are less interested in the math-y side of things. This is perfectly OK. Rather than trying to fake it, just learn the ABCs you have to know in order to work in the field so that you can speak the language, otherwise, you’ll have no idea what everybody’s talking about. Then, focus on your assets – your education and aptitude.

Much of the history of comp sci has been a history of taming the beast of symbolic logic in its purest form (the digital computer) to something that wetware human brains can actually interact with. Natural language processing (NLP) is one of the oldest sub-fields of AI and it has been one of the holy grails of comp-sci from day one to be able to write instructions in natural language and have the computer understand and perform the desired action(s). As you know, Alexa, Siri, etc. are closing in on this goal. They’re still light-years away from what we mean by an intelligent digital assistant, but they’re also light-years closer than anything that has come before. So, the AI design problem actually has two faces:

  • The user tells the AI to do something
  • The AI figures out what the user wants the computer to do
  • The computer does stuff the AI tells it to do

Most of the focus is still on the bottom face, the AI translating semantic tasks down into computer code. But those semantic tasks still have to be entered through some clunky interface – a touch-screen, a Web interface, talking in a very special way to Siri, typing selected words at Cortana, and so on. Remember the early days of Google, when you had to chop up your search phrase into weird keywords to try to prod the search engine into giving you what you wanted? Nowadays, you pretty much just type what you’re looking for and Google finds it. So, Google’s search capability has already made the transition from “clunky and mechanical” to “smooth and almost natural”. The wider AI problem is to do this for all sorts of tasks which we could wish a computer to perform on our behalf. In short, building the human–computer interface of the future is one of the most wide-open problems in AI, and it’s the kind of problem that requires design-based thinking (Paul Graham has a note on this which I love) rather than machine-based thinking.

Also, how much raw mathematical ingenuity do you need to make theoretical breakthroughs in this field? By studying somewhat hard I got As in a highly competitive East Asian exam system, but I didn’t really have Olympiad-level spark or a real appreciation for mathematical reasoning and problem solving. I respect maths and formal logic a lot more now, having encountered them through economics and philosophy, but I’m not the most mathematically intuitive guy even in my econ program. Still, I am willing to put in the time to learn and solve problems from textbooks in my spare time.

My advice:

  • Learn the AI-ML vocabulary.
  • Have the basic maths under your belt to at least parse what people are talking about e.g. what a gradient is, and so on. No need to get into the advanced stuff, just have the ABCs down.
  • Focus on design, design, design. This is the blind-spot in AI. Have you seen Sophia or those robots at Boston Dynamics that move like the stuff of nightmares? AI, to be useful, needs to be useful for humans. That’s where I’d put my focus if I were in your shoes.
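To give a feel for the “ABCs” level of maths involved (a toy example of my own, not from the discussion): the gradient of a function at a point is just the vector of its partial derivatives, and you can sanity-check an analytic gradient with finite differences.

```python
# Toy function of my own choosing: f(x, y) = x**2 + 3*y.
# Its analytic gradient is (2x, 3).
def f(x, y):
    return x ** 2 + 3 * y

def numerical_grad(x, y, h=1e-6):
    # Central differences approximate each partial derivative.
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return dfdx, dfdy

gx, gy = numerical_grad(2.0, 5.0)
# At (2, 5) the analytic gradient is (4, 3); the numerical one should match closely.
print(round(gx, 4), round(gy, 4))
```

If you can read and check something like this, you can follow most introductory ML material; the advanced machinery is layered on top of exactly these ideas.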

