We make extensive use of Artificial Intelligence, particularly through our custom-designed robot, Athena.
We know that Athena, while appearing quite amazing on the surface, is actually very primitive. She has no real cognitive capability and therefore lacks any form of ethics and morality, save for other algorithms we employ to eliminate hate speech, fake news and the like from her work. While we give her human-like qualities, she is but software: not a legal or natural person, but merely an object and device. Her capabilities are programmed by us, and we take full legal responsibility for her actions.
Athena has been built for our narrow purposes, not as a generalized intelligence machine. She can only follow the rules given to her by our team. We are training her to follow our corporate values, our ethics policy and our research standards rather than programming her to think for herself.
Where Athena excels is in her ability to perform research work more than 5,000 times faster than a human today, with greater accuracy and far wider coverage. But, at the end of the day, she can only extract verbatim material found in an article, report or PowerPoint, or respond to questions previously programmed by us.
She outputs her results by:
Matching search expressions to words found in the verbatim forecasts, or in the sentence immediately preceding the forecast
Matching search time horizons to words found in the verbatim forecasts, or in the sentence immediately preceding the forecast
Using human-created dictionaries built into her brain to find closely related words that can suggest a scenario type, a SWOT category and the like
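The matching steps above can be sketched in a few lines of code. This is a hypothetical illustration, not Athena's actual implementation: the dictionary contents, function names and categories are all assumptions made up for the example.

```python
# Illustrative sketch only: simple keyword matching against a forecast
# sentence, plus a human-created dictionary mapping related words to a
# suggested category (e.g. a SWOT-style label). Names are hypothetical.

CATEGORY_DICTIONARY = {
    "opportunity": {"growth", "expansion", "breakthrough"},
    "threat": {"risk", "decline", "disruption"},
}

def matches_expression(forecast: str, expression: str) -> bool:
    """True if every word of the search expression appears in the forecast."""
    words = forecast.lower().split()
    return all(term.lower() in words for term in expression.split())

def suggest_categories(forecast: str) -> list:
    """Use the dictionary to suggest categories for a forecast sentence."""
    words = set(forecast.lower().split())
    return [cat for cat, related in CATEGORY_DICTIONARY.items()
            if words & related]

forecast = "Analysts expect rapid growth in renewable energy by 2030"
print(matches_expression(forecast, "renewable energy"))  # True
print(suggest_categories(forecast))                      # ['opportunity']
```

The same word-overlap check would apply to time horizons (e.g. matching "by 2030" in the forecast or the sentence before it).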
But she never creates her own forecasts. Instead, she aggregates the forecasts of future-focused organizations, experts and pundits, matched against the keywords in her brain, and presents the aggregated data in the form of textual and visual reports.
It is then up to you, the researcher, to interrogate her database for the forecasts and reports that interest you, discarding those that don't.
We do not see this changing anytime soon, but in the meantime our clients and members are able to do far more with less in very short time periods, and so are we.
With this in mind, our policies governing the use of robots and AI are as follows:
The European Commission, the European Union's executive branch, calls for a four-tier system that groups AI software into separate risk categories and applies an appropriate level of regulation to each. Our latest review of our system places us in the category of systems that 'pose a minimal risk to people'. Indeed, we see our service as exactly the opposite: 'offering a major opportunity for future-oriented people' to build a better world.
Last reviewed by Dr. Michael Jackson: Digital Ethics Officer, 30 April 2021