Thursday, July 7, 2016

Put Your Glasses On, You'll Hear Better

Since last week's post, I've been doing a lot of reading on game engines and the Entity-Component-System pattern, and talking to a few people who actually designed RTS engines from the ground up and made kick-ass RTS games with them (Command & Conquer and Grey Goo, for example). The comments you guys gave me, both here and on Facebook, were very helpful and allowed me to shift my perspective a bit and start building up the required Systems and Components for my prototype.
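Just to make that concrete for anyone unfamiliar with the pattern, here's the rough shape of what I mean by Components and Systems - a bare-bones, hand-rolled C# sketch, not Unity's own API; all the names (World, MovementSystem, etc.) are mine:

```csharp
using System.Collections.Generic;

// Bare-bones Entity-Component-System: entities are just int IDs,
// components are plain data, and all the behavior lives in systems.
struct Position { public float X, Y; }
struct Velocity { public float DX, DY; }

class World
{
    public Dictionary<int, Position> Positions = new Dictionary<int, Position>();
    public Dictionary<int, Velocity> Velocities = new Dictionary<int, Velocity>();
}

static class MovementSystem
{
    // Acts on every entity that has both a Position and a Velocity.
    public static void Tick(World world, float dt)
    {
        foreach (var id in new List<int>(world.Positions.Keys))
        {
            if (!world.Velocities.TryGetValue(id, out var v)) continue;
            var p = world.Positions[id];
            p.X += v.DX * dt;
            p.Y += v.DY * dt;
            world.Positions[id] = p;
        }
    }
}
```

The point is the split: entities are just IDs, Components are dumb data, and the Systems iterating over them carry all the logic.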

That's the first milestone, really: a working prototype - a Vertical Slice. So, by next week I'm hoping to define the MVP: the Minimum Viable Prototype (yes, I know...). It's quite well defined conceptually, but not architecturally.

However, in the meantime, I'm going to present a small "diversion".

There's an old joke, a bit of banter, telling someone to put their glasses on so maybe they'll hear better. It always gets the polite laugh at meetings - when said to someone who isn't paying attention, for example. But is there some truth hiding in that joke?

You know how, when someone loses a sense, the others compensate? When someone loses their sight, their hearing compensates, or becomes more attuned. This happens consciously - obviously, that person will actively try to compensate and understand their environment through their other senses: touch, sound, etc.

The Israel Children's Museum in Holon has an exhibit called Dialogue In the Dark, aimed at exactly that:
In this extraordinary exhibit, we can't see anything, but we discover a whole world and maybe, especially, ourselves.
During the tour, blind guides lead the visitors through dark but designed spaces: nature, a noisy pedestrian crossing, a port, a market, and a pub.
Blind and vision-impaired individuals take an active part in opening the visitors' eyes in the darkness, demonstrating that their world is not poorer, but simply different.
The exhibit seeks to lead the visitors to have a unique experience of themselves, to introduce them to the rich sensory world hidden within each and every person, and to create an unbiased encounter between blind and sighted people.

The Attentional Resources Theory in Cognitive Psychology suggests that this also happens unconsciously. There are several competing theories on the matter, so far without a definitive answer as to which is the most encompassing and accurate. But the gist of them is that there is a limited supply of Energy that allows our cognitive processes to function - to process the data cascading in from the various sensors and modalities, and to reason those torrents into actionable information for our use. One important thing to note, without going into the finer details of these theories, is that this limited resource is not a constant; different situations can allow for different ad hoc pools of said Energy - perhaps similar to how bursts of adrenaline can enhance strength or stamina.

However, while Dialogue in the Dark deals with people who are already blind, I'd like to ask: what about people in the process of Becoming Blind?

Say someone has Glaucoma. Or just a really strong prescription, and they aren't wearing their glasses. That person hasn't lost the sense of sight. In fact, that sense now places a heavier burden on the energy store. This may result in slower reasoning, slower distinction between visual cues, and - since these parts of the cognitive process are shared, are central to our overall awareness, and may become a bottleneck for other modalities and peripheral processes - essentially, worse hearing.
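To make that bottleneck concrete, here's a toy C# sketch of a shared attention pool. The names (AttentionPool, ModalityChannel) and the numbers are all made up for illustration - none of the competing theories prescribes them:

```csharp
using System;
using System.Collections.Generic;

// A toy model of a shared attentional resource pool. Each modality
// requests processing energy; an impaired sense requests more,
// leaving less for everyone else.
class ModalityChannel
{
    public string Name;
    public float Demand;     // energy requested this "tick"
    public float Allocated;  // energy actually granted
}

class AttentionPool
{
    public float Capacity = 1.0f; // ad hoc pool size; can grow under arousal

    public void Allocate(List<ModalityChannel> channels)
    {
        float totalDemand = 0f;
        foreach (var c in channels) totalDemand += c.Demand;

        // If demand exceeds capacity, every channel gets scaled down:
        // the shared pool is the bottleneck, not any single sense.
        float scale = totalDemand <= Capacity ? 1f : Capacity / totalDemand;
        foreach (var c in channels) c.Allocated = c.Demand * scale;
    }
}

class Demo
{
    static void Main()
    {
        var vision  = new ModalityChannel { Name = "vision",  Demand = 0.4f };
        var hearing = new ModalityChannel { Name = "hearing", Demand = 0.4f };
        var pool = new AttentionPool();

        pool.Allocate(new List<ModalityChannel> { vision, hearing });
        Console.WriteLine($"hearing with glasses on:  {hearing.Allocated:F2}");

        vision.Demand = 0.9f; // blurred vision: same task, higher cost
        pool.Allocate(new List<ModalityChannel> { vision, hearing });
        Console.WriteLine($"hearing with glasses off: {hearing.Allocated:F2}");
    }
}
```

Bump the vision channel's demand past what the pool can cover and the hearing channel's share drops; put the glasses back on and it recovers.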

So... if you put your glasses on, you may actually hear better - or at least perceive and process what you hear better.

So what?

As part of the learning process, I'm experimenting in Unity with building a simple AR or VR experience that will try to convey these modal exchanges. I've actually built two Scenes for this already - one a basic Google VR and Vuforia AR scene utilizing a mobile phone's main camera; the other just a Google VR scene, set in the Urban map I have set up for my RTS project.


This isn't a game. Just an experience. Something bite-sized, that will show diminished or impaired vision and try to simulate some form of induced deafness. With a few "in-game" options, like putting your glasses on, or closing your eyes - or maybe meditation.
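As a first pass, here's a rough sketch of the kind of glue script I have in mind for those Scenes - a single vision-impairment value driving both a blur amount and a muffled-audio effect. The blur material property is hypothetical (it depends on whatever blur shader ends up in the scene); the AudioLowPassFilter is a stock Unity component:

```csharp
using UnityEngine;

// Drives both the visual impairment and the induced "deafness"
// from one shared value, per the resource-pool idea above.
public class ModalBottleneck : MonoBehaviour
{
    [Range(0f, 1f)] public float visionImpairment = 0.8f; // 0 = glasses on
    public Material blurMaterial;        // hypothetical full-screen blur shader
    public AudioLowPassFilter earFilter; // stock Unity component on the listener

    void Update()
    {
        // Blurrier vision costs more attention...
        blurMaterial.SetFloat("_BlurAmount", visionImpairment);

        // ...so hearing degrades with it: the cutoff falls from clear
        // (22 kHz) toward muffled (~500 Hz) as visual load rises.
        earFilter.cutoffFrequency = Mathf.Lerp(22000f, 500f, visionImpairment);
    }

    // Hooked up to the "put your glasses on" in-game option.
    public void PutGlassesOn() { visionImpairment = 0f; }
}
```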

The idea is to make it a quick learning experience.

You may notice that at the beginning of this post I described this endeavor as a "diversion", in quotation marks. That's because the two - this experience and the RTS - are related. Closely, even.
I don't mean to make an RTS that simulates how vision-impaired people play one, of course.
The relation comes from the design of the Machine faction, the Smart Agent-enhanced faction. These same ideas, stemming from the Attentional Resources Theory, may very well be the chink in the AI army's armor.

We should all keep that in mind for the coming Singularity Apocalypse - Processing Power is the Achilles' Heel of Skynet.

More to come....


Sunday, June 26, 2016

RTS Engine UML

This is more of a technical post. A request of sorts.

I'm building my RTS from the ground up, in Unity. This means building the RTS engine itself. I've tried and followed several tutorials and full guides and gotten some really nice results - and in the process finally understood how Unity works and how to properly arrange the source code and assets.

However, before I get down to properly implementing the engine, I'm doing some more research into the various approaches to how different RTS engines are built.

Currently I'm compiling UMLs from several such engines.

Since I'm using Unity as the underlying engine, the UML I'm compiling is of a more logical nature, dealing with the hierarchy of in-game objects - Units, Buildings, Players, Goals - and the Global Managers - AI, Resources, and other relevant game state. Everything else is handled by the Unity engine itself.
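To give a feel for what that logical UML boils down to, here's a rough C# skeleton of the hierarchy as I currently picture it - all the names are my own placeholders, not settled architecture:

```csharp
using System.Collections.Generic;

// In-game object hierarchy: everything a player can own or order around.
public abstract class Entity
{
    public int Owner;          // index into the Player list
    public float Health;
    public abstract void Tick(float dt);
}

public class Unit : Entity
{
    public float Speed;
    public Goal CurrentGoal;   // move, attack, guard...
    public override void Tick(float dt) { CurrentGoal?.Advance(this, dt); }
}

public class Building : Entity
{
    public Queue<Unit> ProductionQueue = new Queue<Unit>();
    public override void Tick(float dt) { /* advance production */ }
}

public abstract class Goal
{
    public abstract void Advance(Unit unit, float dt);
}

public class Player
{
    public List<Entity> OwnedEntities = new List<Entity>();
    public int Credits;        // resource stockpile
}

// Global managers: game-wide state that no single entity owns.
public class ResourceManager { /* income, harvesting, caps */ }
public class AIManager       { /* per-Player strategic brains */ }
```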

I'd love to get some advice and pointers on what and where to look for more info on Real Time Strategy frameworks and object Hierarchies.

In one of my Navy roles, as an ILS officer, we printed out a giant, 2-meter-long poster describing the Cradle-to-the-Grave Logistic Support method. It worked well in guiding many projects.

Ideally, I'd be able to compile and print out a nice wall scroll of a suitable UML to admire while I code - and to doodle all over with changes and improvements.

Sunday, June 19, 2016

Nunc Id Vides, Nunc Ne Vides

I've been playing Real Time Strategy games for ages - since Dune 2, through the C&C and Craft series. I'm more than a little biased towards the C&C series. I love it. So much so that I was at one point a co-director of the world's leading C&C fan site - PlanetC&C, on the GameSpy Network. In the Command & Conquer community I'm known as Cypher - writer of the Canon of C&C, not the nicest of the Official Forum moderators, and all-around C&C Lore Guru. And though I'd love to someday have a guiding hand in Rebooting the Tiberian lore, it's time to create my own world.

About a year or more ago, Mark Skaggs (former Red Alert 2 producer and current Zynga exec) came to the C&C community in search of creatives - of modders - to join him in discussing and brainstorming a new, highly moddable mobile RTS. Not a whole lot has come out of it, as far as I know, except for Empires & Allies 2. However, I came out of those sessions with an interesting Core Mechanic to try in an RTS.


Skynet is Coming

Well no, I'm not a futurist. Nor do I intend on foretelling our doom. It's just a story.
I am, however, a software architect, working on a Learning Smart Agent: a smart agent capable of Cognitive reasoning - or the simulacrum thereof - with applications in many different fields, from monetary fraud, through Physical and Network Cyber Security, to actual Civilian and Military security. A single core product, with the flexibility and scalability to be applied to pretty much any field.

A Smart Agent which, in concept, is really not unlike the Machine or Samaritan in Jonathan Nolan's Person of Interest TV show.

There is no shortage of similar solutions nowadays, most of them based on large computing clouds - like IBM's Bluemix platform, or even Google's own datasphere built on their interconnected products. I suspect that even PornHub's Bang.Fit sexercise system might be based on a similar solution, if not an actual Bluemix app.

Some Sci-Fi stories deal with AIs becoming self-aware and going rogue, trying to take over humanity. A Singularity point, beyond which we humans cannot possibly defend ourselves against a Super Intelligence as far beyond ours, as alien to ours, as ours is to that of an ant. We have no chance.



In Person of Interest, however, though the story deals with the Admin teaching the Machine morals, and with the Machine having some sort of ability to ask new questions - a form of curiosity usually associated with sentience - the more interesting concept presented in the show doesn't require any actual sentience or self-awareness. Though, certainly, the ability to calculate and "predict" the chances of a crime occurring, or even basic morality-based reasoning, would fool any Turing Test.

But, as I said, I'm not going to foretell our doom.
Hell, if anything, our doom will come from the Blissful Stagnation our AI overlords will allow us.

And while these ideas are intriguing enough to discuss ad infinitum, what caught my eye, as a Game Designer, was the God Mode presented in the show.

God Mode

The Machine sees everything, hears everything. Hell, with cell phones it could even go SONAR (it hasn't yet, in the show). It is connected to the city's CCTV system. This gives any Human Agent of the Machine - provided the Machine is supplying real-time information - the ability to "predict" where a threat is coming from. Or, rather, the Machine does the actual Spotting, telling the Human Agent where to shoot before the enemy even enters their Line of Sight. Not Unlimited Ammo or Unlimited Armor, but certainly a realistic sort of God Mode. Kinda.
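In game terms, that kind of Spotting is easy to sketch: the Machine skips line-of-sight checks entirely and just hands its agent the most imminent threat. A toy version (the names are hypothetical - nothing from the show or a real engine) might look like this:

```csharp
using System.Collections.Generic;
using UnityEngine;

// The Machine's "God Mode" spotting: unlike a normal unit, it doesn't
// test line of sight - it knows every hostile position outright and
// simply tells its human agent which threat will reach them first.
public static class MachineSpotter
{
    public static Transform MostImminentThreat(
        Vector3 agentPosition, IEnumerable<Transform> hostiles)
    {
        Transform worst = null;
        float bestDistance = float.MaxValue;

        foreach (var hostile in hostiles)
        {
            // No occlusion or vision-cone checks here - that's the point.
            float d = Vector3.Distance(agentPosition, hostile.position);
            if (d < bestDistance) { bestDistance = d; worst = hostile; }
        }
        return worst; // aim here before the enemy ever shows themselves
    }
}
```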



In an FPS, that's easy. But as a long-time fan of RTS, I wanted to apply this "realistic" God Mode mechanic at the tactical and strategic levels. Thus was born Project NIV.

When approaching Game Design, I like to work on both Fiction and Gameplay together; I believe they drive each other. The idea being that intuitiveness of Control is driven by Immersion, and Immersion cannot come from a disconnect between the gameplay of a world and its narrative. The Goals are clearer, the Controls are clearer - the experience flows better.

This is why I briefly described, above, the abilities of such an Agent and opened the door for discussion about an AI-controlled, or Smart Agent-affected, world. Many of these considerations have made their way into my design - the Abilities and Weapons, as well as the Weaknesses, of the Machine-driven faction of Project NIV.

As I mention in the About page, I'm a Psychology student, which goes hand in hand with being a developer on a Learning Smart Agent: I implement many of the things I learn in the Cognitive Sciences in my software. Thus it seems only natural to draw similarities and correlations between how an AI - or a network of connected, surveillance-fed Smart Agents - would operate and how a human brain operates.



Namely, for the Core Mechanic of Project NIV, I'm referring to the Attentional Resources Theory.

More about that, in my next post.

Friday, June 17, 2016

Welcome to my Blog

Hey everyone and welcome to the first entry of my Game Design blog.

I'm Cypher. In the About page you'll notice that I'm a Game Designer. Why? Because I decided to be. It is my passion, my obsession, and - as soon as possible - it will be my livelihood. That's the goal, at least.

What I do bring to the table, however, is experience in many different fields, from each of which I've taken lessons, ideas, and inspiration for Game Design. Be it my work as a Software Architect, building Machine Learning-based agents for various industrial fields, from Military, through Cyber Security, to IoT and Black Markets. Or my military service, both as a combatant and as a commanding officer. Or my Psychology and Cognitive Sciences studies - which actually go very well with my Machine Learning work.

Specifically, game design-wise, I've been a contributor to the Command & Conquer franchise since the early days of Westwood: starting as the writer of the Canon of C&C Encyclopedia, which was later used to various extents in EA's in-house C&C Bible; through important gameplay and UI designs in the later games; and all the way to working with the writers of the last C&C games on their stories and cinematics.

Today, I'm working on several game projects.
  • Caracri - a turn-based, Go-like game, created by a friend of mine and built on the LibGDX framework for cross-platform play on the Web, Android, and Desktop (maybe we'll do an iOS port later).
  • Fort Triumph - a Fantasy Turn Based Tactics RPG.
  • Project NIV - which is going to be the core of this blog, as I discuss core mechanics, ideas, and gradual Unity implementations, as well as related side projects.

So... Put Your Glasses On, You'll Hear Better for it, and let's get on with this thing.
P.S.
NIV is a code name for the project. Basically, it's short for Nunc Id Vides - the first part of the Unseen University's motto.