The Terminal Time project runs on a Macintosh G3 with at least 128 MB of RAM and an Ultra Wide SCSI hard drive with an external connector. All programs, and the entire audiovisual library they draw upon, are stored on a 36 GB external drive. The system also uses a two-channel powered mixer and a unidirectional microphone as the interface to its applause-metering system. It requires only a standard data projector (or video converter) and a sound system to play in any venue.

The Terminal Time artificial intelligence architecture is based on three major components: a knowledge base, ideological goal trees, and story experts. The knowledge base is a vast knowledge web that uses the top 3,000 terms from the Cyc Corporation's upper Cyc ontology, along with several thousand custom classifications. Ideological goal trees are used to choose and join historical events found in the database in accordance with viewer responses. Story experts apply narrative conventions to plan, compose, and evaluate the final story texts. The following diagram illustrates how these components interrelate:
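To make the goal-tree idea concrete, here is a minimal Python sketch of how ideological goals might weight and select historical events from a knowledge base. All event names, theme labels, and weights here are invented for illustration; this is an assumption-laden sketch of the general technique, not Terminal Time's actual implementation.

```python
# Hypothetical events, each annotated with weighted themes (all values invented).
EVENTS = [
    {"name": "invention-of-printing", "themes": {"progress": 0.9, "religion": 0.2}},
    {"name": "crusades", "themes": {"religion": 0.9, "conflict": 0.8}},
    {"name": "industrial-revolution", "themes": {"progress": 0.8, "class": 0.6}},
]

def select_events(goal_weights, events, k=2):
    """Rank events by how well their themes match the active ideological goals."""
    def score(event):
        return sum(goal_weights.get(theme, 0.0) * w
                   for theme, w in event["themes"].items())
    return sorted(events, key=score, reverse=True)[:k]

# Viewer applause might bias the goals toward, say, technological progress:
progress_bias = {"progress": 1.0, "conflict": 0.2}
chosen = select_events(progress_bias, EVENTS)
print([e["name"] for e in chosen])
# → ['invention-of-printing', 'industrial-revolution']
```

The key design point is that the same event database yields different histories under different goal weightings, which is how viewer responses can steer the narrative.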

Once the narrative generation system renders the final narration, video and audio tracks are selected to illustrate the six-minute story segment. This search is based on weighted keyword indexing of each video clip: a clip may carry as many as ten keywords drawn from the more than 300 used in the database. Once video and audio clips are selected, they are joined into a storyboard in Terminal Time's multimedia architecture.
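The keyword-matching step described above can be sketched as a simple weighted scoring over the clip index. The clip filenames, keywords, and weights below are hypothetical; this shows only the general matching scheme, not the actual index or scoring function Terminal Time uses.

```python
# Hypothetical clip index: each clip carries weighted keywords (values invented).
CLIPS = [
    {"file": "factory.mov", "keywords": {"industry": 0.9, "machine": 0.7, "labor": 0.5}},
    {"file": "cathedral.mov", "keywords": {"religion": 0.9, "architecture": 0.6}},
]

def best_clip(narration_keywords, clips):
    """Return the clip whose weighted keywords best cover the narration's terms."""
    def score(clip):
        return sum(clip["keywords"].get(k, 0.0) for k in narration_keywords)
    return max(clips, key=score)

print(best_clip({"industry", "machine"}, CLIPS)["file"])
# → factory.mov
```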

Lastly, the multimedia architecture renders the final story, cutting and splicing the video, audio, and narration tracks in real time for presentation to the audience.
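The real-time cutting and splicing amounts to laying selected clips onto a timeline paced by the narration. The sketch below shows one plausible way to structure that assembly; the segment durations, clip names, and data layout are assumptions made for illustration, not the actual multimedia engine.

```python
# Minimal sketch: pace each selected clip to its narration segment's duration,
# producing a flat timeline the playback engine could step through in real time.

def build_timeline(narration_segments, clips):
    """Pair each narration segment with a clip and assign start/end times."""
    timeline = []
    t = 0.0
    for seg, clip in zip(narration_segments, clips):
        timeline.append({"start": t, "end": t + seg["duration"], "clip": clip})
        t += seg["duration"]
    return timeline

segments = [{"text": "In the beginning...", "duration": 8.0},
            {"text": "Then came the machines.", "duration": 6.0}]
tl = build_timeline(segments, ["dawn.mov", "factory.mov"])
print(tl[1])
# → {'start': 8.0, 'end': 14.0, 'clip': 'factory.mov'}
```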