Artificial intelligence (AI) has never been about developing a computer-based system that can literally "think", although that has been the popular perception, not helped by generations of sci-fi writers and their androids (cf. Data in Star Trek: The Next Generation).
Very briefly, AI has really been about developing machines capable of reasoning logically, within circumscribed boundaries, about a limited range of relatively narrow subjects. Even then, success has been slight. The chances of machines developing "empathy" within the foreseeable future appear to remain somewhere between slim and none.
To be truly intelligent, computers would need to be capable of forming genuine opinions (not just the results of programmed sequential logic) and of consistently assigning qualitative values (feelings) to those opinions. Intelligence is not pure reasoning, and empathy is not the outcome of pure reason.
Human reasoning is often only partially based on pure logic. Computers reason on the "if this then that, else some other prescribed variable" principle. Humans reason on the "if this then maybe that, this, or both, else perhaps something completely different arrived at by a very circuitous train of thought" principle. We tend to be more or less "intuitive" in our reasoning. The amount of processing power required to reproduce that on current computer architectures is mind-boggling - and not available.
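The contrast drawn above can be sketched in a few lines of code. The function and its labels here are illustrative inventions of mine, borrowing only the "if this then that" phrasing from the text: the point is that a program's "reasoning" is an exhaustive, prescribed branch, with no room for the intuitive leaps attributed to humans.

```python
# A minimal sketch of the "if this then that, else some other prescribed
# variable" principle: every input maps to exactly one predetermined outcome.
def machine_verdict(signal: str) -> str:
    if signal == "this":
        return "that"
    elif signal == "that":
        return "some other prescribed variable"
    else:
        # Anything unanticipated falls through to a fixed error case -
        # the machine never improvises a "circuitous train of thought".
        return "undefined"

print(machine_verdict("this"))  # always "that": deterministic, never intuitive
```

However elaborate the branching becomes, the structure stays the same: prescribed inputs, prescribed outputs, nothing resembling intuition.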
Until a different type of computer is developed, one that works on the same principles as the human brain, I doubt any computer will ever come within a bull's roar of true self-awareness ...
empathy? Huh!
