AI has advanced by leaps and bounds, but how far has it really come, and where can it go?
Past
Depending on how loosely you define AI, it has been in development for quite some time. In fact, chess or tic-tac-toe computer players would count. ELIZA, a 1960s program that would talk to the user like a therapist, was quite impressive for its time.
Anything that simulates intelligent behaviour can count as AI. That sadly means a whole world of algorithms you might consider non-intelligent do in fact count and are called AI. A chess program that searches every possible move is AI: it looks like a real player making good moves. The actual complexity of such a program is in steering the search down only certain paths, since some lines of play never lead to good outcomes. The other interesting problem is telling the computer which board positions are good. You want to compare boards regardless of the move that produced them, so each board needs a score, so the computer can tell which moves lead to a more valuable outcome.
You might say that all that matters is reaching a winning board. However, since the search trims certain moves, along with everything that would have followed them, you need to be able to compare intermediate positions too. Similarly, you want to search the most promising moves first. Why? Because the computer must be fast, and the number of possible game continuations is astronomically large, effectively infinite for a brute-force search. The sketch below shows the idea.
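As a concrete illustration, here is a minimal sketch of that kind of search: minimax with alpha-beta pruning. To keep it short it plays Nim (take 1 to 3 stones; whoever takes the last stone wins) instead of chess, and it only scores finished games; a real chess engine would stop at some search depth and fall back on a heuristic evaluation function to score unfinished boards. The game choice and function names here are mine for illustration, not from any particular engine.

```python
def moves(stones):
    """Legal moves: take 1, 2, or 3 stones (never more than remain)."""
    return range(1, min(3, stones) + 1)

def minimax(stones, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Score a position: +1 if the maximizing player can force a win,
    -1 if not. Alpha-beta pruning abandons branches that cannot change
    the result, which is what makes deep searches feasible."""
    if stones == 0:
        # Whoever just moved took the last stone and won,
        # so the player to move now has lost.
        return -1 if maximizing else 1
    if maximizing:
        best = float("-inf")
        for take in moves(stones):
            best = max(best, minimax(stones - take, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:  # prune: the opponent avoids this line anyway
                break
        return best
    else:
        best = float("inf")
        for take in moves(stones):
            best = min(best, minimax(stones - take, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

if __name__ == "__main__":
    for stones in range(1, 9):
        winner = "first" if minimax(stones, True) == 1 else "second"
        print(f"{stones} stones: {winner} player wins with perfect play")
```

Run on piles of 1 through 8 stones, this reports that the second player wins exactly when the pile is a multiple of 4, which matches the known theory for this game.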
Present
In my opinion, the most impressive AI work nowadays is in neural networks: software modeled to roughly "copy" how the brain works. This makes a lot of sense, since we are trying to simulate our own thought and logic, which is what we see as intelligent. This approach is best at finding patterns. No new ideas, per se, come out of a neural network; it doesn't learn concepts beyond what it was trained to learn. It analyzes the data it is given and tries different ways to classify it, while the programmer writes a script to tell the computer whether it is right or wrong. Using this feedback, the network keeps adjusting its internal weights and values until it can get it right.

But it won't learn without feedback, and only from the feedback you give it, so it can't learn new concepts unless there is feedback for them as well. This seems to limit current AI to being unable to reason out new ideas or solutions the way humans can. Humans take concepts and extend them beyond what they know; AI cannot yet do this. The toy training loop below shows this feedback cycle concretely.
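To make that feedback loop concrete, here is a toy sketch (my own illustrative code, not any particular library's API): a tiny two-layer network learns XOR using nothing but the gap between its guesses and the correct answers. The layer sizes, learning rate, and step count are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR cases: inputs and the correct answers (the "feedback").
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random starting weights: the network initially knows nothing.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: the network makes its guesses.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Feedback: how far off was each guess? Without y there is
    # nothing to learn from -- exactly the limitation noted above.
    error = out - y

    # Backward pass: nudge every weight in the direction that shrinks
    # the error (gradient descent, not blind random trial and error).
    d_out = error * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

# Should print values close to [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```

The key point is that the labels y are the only source of learning: remove them and the weights never improve, no matter how long the loop runs.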
Future
There is a lot of thinking about AI's future. All I can say is that if we want AI to be as good at creativity as humans, we might need a different type of hardware; it might need to be carbon-based, not silicon. There might need to be a major upheaval in computing before any true AI is possible. It will be interesting to see how close to humans we can get, or whether we can go further.