
Artificial Intelligence

We make extensive use of artificial intelligence, particularly through our custom-designed robot, Athena.

We know that Athena, while appearing quite amazing on the surface, is actually very primitive. She has no real cognitive capability and therefore lacks any form of ethics or morality, save for the additional algorithms we employ to eliminate hate speech, fake news and the like from her work. While we give her human-like qualities, she is only software: not a legal or natural person, but merely an object and a device. Her capabilities are programmed by us, and we take full legal responsibility for her actions.

Athena has been built for our narrow purposes and not as a generalized intelligence machine. She can only follow the rules given to her by our team. We are training her to follow our own corporate values, our ethics policy and our research standards rather than programming her to think for herself.

Where Athena excels is in her ability to perform research work: more than 5,000 times faster than a human today, with greater accuracy and far wider coverage. At the end of the day, though, she can only extract verbatim material found in an article, report or PowerPoint, or respond to questions we have previously programmed.

She outputs her results by:

  • Matching search expressions to words found in the verbatim forecasts or in the sentence immediately preceding each forecast
  • Matching search time horizons to words found in the verbatim forecasts or in the sentence immediately preceding each forecast
  • Using human-created dictionaries built into her brain to find closely related words that can suggest a scenario type, SWOT category and so on

But she never creates her own forecasts. Instead, she aggregates the forecasts of future-focused organizations, experts and pundits, using the keywords in her brain, and presents the results as textual and visual reports.
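
The sketch below illustrates this style of matching in Python. It is a minimal, hypothetical example of the approach described above, not Athena's actual code: the function name, dictionary name and sample dictionary entries are our own illustrations.

```python
# Hypothetical sketch of keyword, time-horizon and dictionary matching.
# All names and entries here are illustrative, not Athena's implementation.

CATEGORY_DICTIONARY = {
    # human-created dictionary: related word -> suggested category
    "threat": "SWOT: Threat",
    "opportunity": "SWOT: Opportunity",
    "collapse": "Scenario: Worst case",
    "breakthrough": "Scenario: Best case",
}

def match_forecast(forecast: str, prior_sentence: str,
                   search_terms: list[str], horizons: list[str]) -> dict:
    """Check a verbatim forecast (and the sentence before it) against the
    researcher's search terms and time horizons, then suggest categories
    from the built-in dictionary."""
    text = f"{prior_sentence} {forecast}".lower()

    matched_terms = [t for t in search_terms if t.lower() in text]
    matched_horizons = [h for h in horizons if h.lower() in text]
    suggested_categories = sorted(
        {cat for word, cat in CATEGORY_DICTIONARY.items() if word in text}
    )

    return {
        "forecast": forecast,
        "terms": matched_terms,
        "horizons": matched_horizons,
        "categories": suggested_categories,
    }

# Example: a researcher searching for quantum computing forecasts to 2030
result = match_forecast(
    forecast="A breakthrough in quantum computing is expected by 2030.",
    prior_sentence="Analysts reviewed the hardware roadmap.",
    search_terms=["quantum computing", "fusion"],
    horizons=["2030", "2040"],
)
print(result["terms"], result["horizons"], result["categories"])
```

Note that everything returned is extracted or looked up, never generated: the forecast text itself is always quoted verbatim from the source material.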

It is then up to you, the researcher, to interrogate her database for the forecasts and reports that interest you, discarding those that don't.

We do not see this changing anytime soon, but in the meantime our clients and members are able to do far more with less in very short time periods, and so are we.

With this in mind, our policies governing the use of robots and AI are as follows:

  • We commit to transparency and responsible disclosure regarding AI
  • We ensure that AI systems are robust, secure and safe so that they do not pose unreasonable risks
  • We treat our robot with the respect and rules we expect of ourselves
  • We expect members, clients, and partners to do the same towards our robots
  • We embody the highest ideals of human rights in our AI and robots
  • We prioritize the maximum benefit to humanity and the environment
  • We work to enhance human capability through robotics and not to eliminate jobs
  • We do not use artificial intelligence to infringe anyone's privacy
  • We mitigate risks and negative impacts as our AI and robots evolve as sociotechnical systems
  • We adopt a safety-first approach
  • We endeavor to avoid algorithmic approaches that disadvantage members or certain groups
  • We seek to develop the correct level of trust between humans and our AI and robots, recognizing both can be fallible
  • We strive not to "launder" human prejudice and discrimination
  • We are open to a third-party evaluation of our AI and robots
  • We document the use and improvement of our AI and robots
  • We ensure our AI and robots act with the same personal data integrity policies as ourselves
  • We do not operate a secret manipulation engine and never will
  • We support the Asilomar AI Principles and the G20 AI Principles, endeavoring to follow them where relevant

The European Commission, the European Union's executive branch, calls for a four-tier system that groups AI software into separate risk categories and applies an appropriate level of regulation to each. Our latest review of our system places us in the category of systems that 'pose a minimal risk to people'. Indeed, we see our service as exactly the opposite: 'offering a major opportunity for future-oriented people' to build a better world.
