One of the most exciting potential results of training machines to analyze online conversations and draw insight from them will be a self-aware machine. This artificial mind will begin offering recommendations. And eventually, once humans trust its insights and recommendations, they will grant it access to executable actions in the real world.
Bias matters in such a scenario. If we train the machine only for efficiency, there may not be much left of current human society. On the other hand, if we train the machine in compassion, we must also train it to steer away from "stupid compassion," an indiscriminate kindness that lets toxins slip through the cracks.
Wisdom must supersede efficiency in the humans who train these machines, so that the synthetic mind receives a balanced perspective on human issues and human evolution.