Game and AI-Improvement Idea


Game and AI-Improvement Idea

Postby Ken Hausle » Mon, 25 Sep 2006 02:02:31 GMT

Here is an idea that I think could both be a different and fun sort of game 
and offer the opportunity for improving AI.  It is very simple and involves 
creating a game from the ground-up where the objective is for all players 
(human and AI) to cooperate in order to maximize overall score.  I think a 
"builder-style" game would work best where the player works with others to 
form some kind of shape or structure or ship or city or whatever.  As one 
plays the AI actions can be closely observed and modified as necessary to 
improve the AI's performance, and thus, further increase the overall score. 
There would be many ways this could be set up.
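As a toy illustration of the shared-score objective (everything here is invented, not from any actual game): three build sites, any mix of human and AI players contributing moves through the same interface, and one pooled score that rewards building evenly.

```python
# Toy sketch of a fully cooperative "builder" game: every player,
# human or AI, adds blocks to one shared structure, and a single
# pooled score is tracked for everyone.
class Board:
    def __init__(self):
        self.heights = [0, 0, 0]          # three build sites

    def apply(self, site):
        self.heights[site] += 1

    def total_score(self):
        # Reward even building: the score is the height of the *lowest*
        # site, so players must cooperate instead of piling on one site.
        return min(self.heights)

def greedy_coop_ai(board):
    """A trivial AI 'player': always reinforce the weakest site."""
    return board.heights.index(min(board.heights))

def play_rounds(n_players, n_rounds):
    board = Board()
    for _ in range(n_rounds):
        for _ in range(n_players):
            board.apply(greedy_coop_ai(board))
    return board.total_score()
```

The point of the sketch is only that the scoring function, not the opponents, provides the challenge: watching where the AI's choices lose pooled points is exactly the observation step described above.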

Just an idea......

Ken H.



Re: Game and AI-Improvement Idea

Postby Vinny (VEC-Games) » Mon, 25 Sep 2006 05:19:30 GMT

"Ken Hausle" < XXXX@XXXXX.COM > wrote in




Your idea sounds very interesting to me. Could you elaborate, perhaps? I 
would really like to hear more details about how you think this would work. 
I get the idea of coop play with the AI, but I'm not sure how you think 
this would all come together.

Re: Game and AI-Improvement Idea

Postby Ken Hausle » Mon, 25 Sep 2006 06:48:19 GMT

"Vinny (VEC-Games)" < XXXX@XXXXX.COM > wrote in message
news: XXXX@XXXXX.COM ...

I believe this approach could be applied to a wide variety of games,
including both board games and PC games, but I suspect it is most suited to
the building-type games (e.g. railroad games, empire-building games, etc.).
Most well-designed games have enough nuances that it can quickly become
quite challenging to figure out how to maximize the score -- the fun of
the game would be that you are all working together, so you don't have to be
concerned about getting attacked or other surprises. There could be
considerable variability based on the number of players, how many AI
players, how many human players, various setup options, game-to-game
randomness, etc.

I'll give you an example. My daughter and I played the board game
Ticket to Ride http://www.ticket2ridegame.com/ today. We played
cooperatively instead of competitively, with our objective being to maximize
our overall score. This game is one of those "simple to learn but
challenging to master" type games. The game was just as fun (if not more)
than playing individually against one another. We had to make several
decisions during the game, evaluate options, compromise, etc. This was the
first time we played, and I already know there are many things we could have
done differently that would have resulted in a higher score, so I want to
play again. There are many other board games I can think of for which this
style of play would be fun --- Tigris & Euphrates comes to mind
http://www.mayfairgames.com/.

With respect to PC games, even a game like Civilization could be set up in a
manner where the goal is to maximize the score of all players -- some
thought would need to go into the scoring system, but then the focus of the
game would switch from competitive play to cooperative play. Ultimately, it
would be better to set the game up from scratch with cooperative play in
mind, but I believe this approach could readily be applied to the types of
games that are already produced. It's just a different way of playing.....

If it was set up from scratch, I think it would be neat to somehow make the
AI programmable, so that you could change its behavior. For example, let's
say you were playing a game and you realized that if the AI made a certain
move, it would score a lot of points for everyone. If the AI doesn't make
this move, you could later re-program it so it would recognize the value of
the move, and then make it. This manner of setting the game up creates a
real incentive to optimize the AI in ways that I think may not typically be
fully explored......
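One rough way to sketch this "re-programmable AI" idea: the AI picks moves from a value table the player can edit between games. The class, the move names, and the values below are all hypothetical, just to show the teaching loop.

```python
# Hypothetical sketch of a player-tunable AI: its move choice is driven by
# a value table the player can edit, so a move the AI undervalued can be
# promoted without touching the engine code.
class TweakableAI:
    def __init__(self, move_values):
        self.move_values = dict(move_values)   # move name -> estimated value

    def choose(self, legal_moves):
        # Pick the legal move the current table rates highest.
        return max(legal_moves, key=lambda m: self.move_values.get(m, 0))

    def reprogram(self, move, new_value):
        # The "teaching" step: the player overrides the AI's estimate.
        self.move_values[move] = new_value

ai = TweakableAI({"build_road": 3, "build_depot": 1})
first = ai.choose(["build_road", "build_depot"])   # the undervalued depot loses
ai.reprogram("build_depot", 10)                    # player teaches the AI
second = ai.choose(["build_road", "build_depot"])  # now it makes the good move
```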

Ken H.




Re: Game and AI-Improvement Idea

Postby Dweeb » Mon, 25 Sep 2006 11:43:13 GMT

There are a few cooperative boardgames. Try Lord of the Rings or Shadows 
over Camelot. Both are great fun with the right group of players.

Re: Game and AI-Improvement Idea

Postby Andrew McGee » Mon, 25 Sep 2006 19:10:11 GMT






Twenty years ago there was a game called United Nations, which worked on the 
same principle -- from the name you can probably guess the setting and 
context.

I once played it with a group of experienced boardgamers, and we all had 
great trouble getting our minds round the idea of co-operating rather than 
competing.

Interesting idea, but the game would have to be difficult enough on a 
co-operative basis. Otherwise there would not be enough intellectual 
challenge to sustain players' interest.


Andrew McGee 



Re: Game and AI-Improvement Idea

Postby Gandalf Parker » Mon, 25 Sep 2006 23:03:47 GMT

"Ken Hausle" < XXXX@XXXXX.COM > contributed wisdom to 




I like it. I would probably buy such a game. It sounds like one that would 
be right up my alley.

You should go to SourceForge.net and start a project on it. Programmers and 
graphics people could join in. 

If you get a working alpha version of it at some point then feel free to 
contact me about getting it published. 

Gandalf  Parker

Re: Game and AI-Improvement Idea

Postby Ken Hausle » Mon, 25 Sep 2006 23:13:03 GMT







Gandalf,

Thanks for the suggestion.  I'm not a programmer, but I have plenty of 
ideas!  I'll take a look at the SourceForge site.

This idea stemmed from a book I'm reading right now, "The Post-Corporate 
World" by David C. Korten that convincingly argues why evolution actually 
favors cooperation over competition.  It seems to me that we all need to put 
more emphasis on cooperation these days if we want a better future.

Ken H.




Re: Game and AI-Improvement Idea

Postby Vinny (VEC-Games) » Tue, 26 Sep 2006 09:15:27 GMT

"Ken Hausle" < XXXX@XXXXX.COM > wrote in



(snip)


I wonder if this cooperative play could extend to combat? Certain games 
allow you to ally with an AI player. However, there's usually not much 
cooperation in the same context you describe. I am trying to figure out 
some ideas along the lines of coop between players. It's a different way of 
approaching things. I have never enjoyed non-combat games and was thinking 
maybe it would be fun to somehow need to work alongside an AI player in 
order to fight off some common enemy. I suppose it doesn't have to be a 
living enemy. Maybe you have to coop to survive some natural disaster like a 
flood or volcano or something.


I have played a few games that allow the user to write his own AI 
script, but I'm not sure if that's what you mean. Writing AI scripts can 
be quite involved and beyond most folks' time.

Re: Game and AI-Improvement Idea

Postby QQalextiQQ » Tue, 26 Sep 2006 12:25:29 GMT

"Vinny (VEC-Games)" < XXXX@XXXXX.COM > wrote in




It's kind of like that in Dominions. You equip commanders, set up initial 
positions and give initial orders. After that, the AI fights on its own. 
Usually when you make an overly complicated plan, it backfires. So it teaches 
you to cooperate with the AI (the tactical AI that controls your troops) and 
make sure your plan is straightforward enough to follow :)

Something similar (but non-combat) exists in Caesar III and its sequels. 
Effectively you have to cooperate with the walker AI, meaning planning the 
city in such a manner that the AI behaves the way you want it to.

I would be interested to see a game which is built completely around 
cooperation with the AI, though it may be hard to implement. Something like 
playing bridge with an AI as a partner can be very frustrating :(

Alex.



Re: Game and AI-Improvement Idea

Postby Vinny (VEC-Games) » Tue, 26 Sep 2006 15:52:44 GMT

 XXXX@XXXXX.COM  (alexti) wrote in






Ah, a good example for me, since I actually played Caesar III a lot back in 
the day. I have to admit, though, the walker AI was silly and not really 
intuitive. It actually killed some joy when I saw that the market lady 
would constantly go down empty roads. She should be able to see there are 
no houses down the road. I read somewhere that the AI uses a pattern to 
determine which paths to take. So C3 in a way was more like a puzzle 
game.


I have to agree with this being hard to implement in a way that is 
enjoyable. Someone who is adept at AI design might be the first to do 
such a thing. The fact that coop AI is not very common says something 
about how hard AI design is.
 

Re: Game and AI-Improvement Idea

Postby Gandalf Parker » Tue, 26 Sep 2006 21:46:01 GMT

"Vinny (VEC-Games)" < XXXX@XXXXX.COM > contributed wisdom to
news: XXXX@XXXXX.COM :


There was an old strat game called Celtic Kings which allowed you to ally
with an AI. You could give orders to his units. Usually you could leave
him alone in his territory, then occasionally set someone moving or
build a unit in order to keep him from doing something stupid. And I
would help him out by posting scouts to his area, since he could see their
reports. And I would have him turn over resources to me that were deep in
safe territory, because he would waste resources fortifying it while I
would direct the resources to the front lines instead.


I wouldn't even mind if it were the type of game where you could
eventually decide to play as a (@#$ against everyone. But if it started
out that survival meant you'd better pick an ally, and then the game allowed
the allies to win without eventually HAVING to declare war and
backstabbing them, then I think it would be a good game.


It DOES have to be done from scratch though. The fairest ones that I've
seen had the host program separate from the client program. And when I'm
playing through the client program, every action I take and every response I
get is presented by short codes and variables. That way someone could
design unique AIs that react to changes in the game. I think it's a great
way to go. Instead of one AI programmed to whatever level the programmer
is capable of, which tends to always play one way, you create a
situation where everyone can create variations of AI which the
game can randomly select from.

But it does have to be done from scratch. That type of access can be
impossible to add to a game later, and even impossible to write into a
sequel.
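The host/client split described above might look something like this rough sketch, where the "protocol" is just short codes and the host picks an AI variant at random per game. All codes, variant names, and the two toy policies are invented for illustration.

```python
import random

# Sketch of a host that speaks only short codes + variables, so any AI
# that maps codes to codes can plug in, and the host can randomly select
# an AI variant for each game.
def cautious_ai(code, state):
    # "ATK" = we are under attack; anything else = a status report.
    if code == "ATK":
        return "DEF"            # defend
    return "BLD 1"              # otherwise build one unit

def aggressive_ai(code, state):
    if code == "ATK":
        return "ATK"            # counterattack
    return "BLD 2"

AI_VARIANTS = [cautious_ai, aggressive_ai]

def host_start_game(seed=None):
    # The host, not the AI author, decides which variant plays this game.
    rng = random.Random(seed)
    return rng.choice(AI_VARIANTS)

ai = host_start_game(seed=0)
reply = ai("ATK", {})           # whichever variant was drawn, the reply
                                # is still a short code the host understands
```

Because every exchange is a code, swapping in a community-written AI is just swapping the function; nothing in the host has to change.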

Gandalf Parker


Re: Game and AI-Improvement Idea

Postby QQalextiQQ » Wed, 27 Sep 2006 11:42:31 GMT

"Vinny (VEC-Games)" < XXXX@XXXXX.COM > wrote in 






Nevertheless, the cooperation with the walker AI was really a cornerstone of 
this series. When the same company released a similar game (was it Children of 
the Nile?) without those annoying walkers, the gameplay pretty much disappeared. 
The real challenge of Caesar III and its sequels was to invent efficient 
development patterns that would work with the weird walker AI :)

Alex.

Re: Game and AI-Improvement Idea

Postby john graesser » Fri, 27 Oct 2006 15:44:02 GMT






The way it was set up years ago was M.U.L.E.; there you lost out if you were
cutthroat with the AI. You had to throw them a bone or two or the entire
colony stagnated.



Similar Threads:

1.another idea for a log based ai

yes, this idea for me pops up occasionally.
pardon the cross post, it is only 2 groups, just ones with long names...

people here may remember me previously having obsessed on the idea.
I still have no idea of originality or whether it could work. however, now, 
I have come up with an idea of how to test it.

the idea would be to simulate a "cat" in an effectively text-based world.
as time goes on various things would show up or things will happen (need to 
eat, crap, sleep, play with other cats, ...). the test would be if it is 
able to figure out what to do in these situations, and if it is able to 
maintain its mood/state eventually.

similarly, testing could help in cleaning up the algo or pointing out what 
is broken. of course, it may also be broken, or it may be useless for games, 
or it may be unoriginal.

I don't really know, and I don't know where to ask.

oh well, whatever, flame if you want...


idea (was originally an email):
---
this is an idea that has actually been beating around my head for years, 
occasionally showing up again for whatever reason (among others related to 
emotion).

now, recently I was reading more of a psychology book, and they were going 
some into memory and learning, and I had noticed that in a general sense, it 
was essentially the same (or a very similar) algo to what I had before 
imagined for a log ai. they went into a little more detail though, showing 
that effectively I only have to really process events previous in time (as 
opposed to 2-way).

this gives me more confidence, the algo could work, and likely would have 
reasonably light performance demands (compared with many other forms of ai 
at least, but still likely a bit more than hard-scripted ai). the advantage 
is that it could be possible to train the behavior of 
characters/monsters/... however, this may-well be ineffective. at least in a 
general sense it could be possible to have them go against each other and 
try to learn how to operate effectively, and maybe occasional human 
involvement could help in making the behaviors "sane" (I am imagining a 
"stick of punishment" here, eg, used for prodding at any ai's that do 
something out of line).

basic behaviors are still necessary to be hard-coded though, others could 
possibly be learned. the important issues are how effective it could be and 
whether it is computationally feasible.

another mystery is whether it would be more or less work than just 
hard-coding it.

I have no idea of any originality here though.
similarly, this is a bit much for me to really try out presently.

rough idea at present:
for the world, as opposed to the mass of direct state changes and method 
calls common to many game ais, one uses a general form of "event system". 
each general event effectively has some properties, and applies to the 
world. for each character, the events are culled and filtered some, eg, 
location-specific events that are invisible are dropped, the event could be 
modified for whether the event applies to self or someone else, ...

ok, so all remaining "relevant" events get recorded to the log, along with 
any recent actions or similar. the log could be truncated after a little 
while, since events too far in the past aren't really relevant.

ok, from the recent events it is necessary to generate a "stimulus".

this would be done by effectively scanning the log, determining the weight 
for each event (diminished with time backwards), effectively the biases for 
the events are multiplied by the weight and added to the stimulus.
this could be made about O(N).

afterwards, a second pass is made effectively adding the stimulus*weight to 
the bias for each event-type. this could help to associate particular types 
of events with each other and particular actions.

this part would likely be O(N^2), and would thus favor shorter logs. it may 
be possible to get this to O(N) as well, eg, by maintaining some state for 
recent event/behavior types and adjusting for each new event (eg: 
multiplying weights by a constant time degeneration and applying current 
stimulus and weights), thus eliminating the need to make a pass over the log 
for each event.
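The two passes above could be sketched roughly as follows, assuming a log of event-name strings and a per-event bias table (the decay constant and learning rate are guesses, and the second pass is shown in its simpler once-per-update form rather than the O(N^2) per-event form).

```python
# Pass 1: scan the log newest-to-oldest with a weight that diminishes
# going back in time, summing weight * bias into a scalar "stimulus".
# Pass 2: feed the stimulus back into each event-type's bias, again
# scaled by that event's time-decayed weight.
DECAY = 0.9   # per-step weight falloff going back in time (a guess)

def compute_stimulus(log, bias):
    stimulus, weight = 0.0, 1.0
    for event in reversed(log):               # newest event weighs the most
        stimulus += bias.get(event, 0.0) * weight
        weight *= DECAY
    return stimulus

def update_biases(log, bias, stimulus, rate=0.1):
    weight = 1.0
    for event in reversed(log):
        # associate recent event types with the current stimulus
        bias[event] = bias.get(event, 0.0) + rate * stimulus * weight
        weight *= DECAY
    return bias

log = ["saw_food", "ate", "saw_food"]
bias = {"saw_food": 1.0, "ate": 2.0}
s = compute_stimulus(log, bias)   # 1.0*1 + 2.0*0.9 + 1.0*0.81 = 3.61
```

Both passes here are single scans of the log, which is what makes the incremental O(N) variant in the text plausible: the same decay multiply can be folded into running state instead of rescanning.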

also part of the stimulus would likely be a more abstract "strengthen" or 
"weaken" bias, which would be associated with some events (eg: those used 
for training behaviors), but would be naturally close to neutral for more 
ordinary events (possibly with a slight bias towards weaken, to cause the ai 
to forget less common patterns). comparatively, the other more usual biases 
(eg: anger, fear, health, or whatever) would be weaker, but would help in 
differentiating between actions (attacking/running from enemies, running 
towards health items in low-health situations, ...).

behaviors could be chosen based on the current stimulus. my thought is that 
behaviors would be chosen based on proximity to the current bias for a 
behavior. nearby behaviors will be generated, and may further affect the 
current stimulus. some behaviors could involve generating events, eg, 
generating something like a "pissed off" event could help in maintaining 
"mood" or similar. limits would probably have to be set on how often events 
could be generated, though, to avoid trying to generate the same action 
repeatedly. the thought though is that events could have a natural local 
"weaken" bias that degenerates over time (eg: towards some more neutral 
bias). reinforced behaviors would get a more reinforced natural bias.

positive events would be a natural strengthen and negative ones a local 
weaken. this might help in reducing destructive actions and generally 
driving behavior or such.
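the proximity-based behavior choice plus the local "weaken" decay might be sketched like this (the behaviors, bias values, and decay constants are all invented for illustration):

```python
# Choose the behavior whose bias is nearest the current stimulus; a local
# "weaken" (inhibition) pushes a recently used behavior further away, and
# every tick all inhibitions drift back toward neutral (0).
behavior_bias = {"eat": 5.0, "sleep": -3.0, "play": 1.0}
inhibition = {"eat": 0.0, "sleep": 0.0, "play": 0.0}

def choose_behavior(stimulus):
    # proximity selection: smallest |stimulus - bias|, penalized by inhibition
    return min(behavior_bias,
               key=lambda b: abs(stimulus - behavior_bias[b]) + inhibition[b])

def tick(chosen, weaken=3.0, recover=0.5):
    # the chosen behavior gets a local weaken; all inhibitions decay
    inhibition[chosen] += weaken
    for b in inhibition:
        inhibition[b] = max(0.0, inhibition[b] - recover)

b1 = choose_behavior(4.0)   # "eat" is nearest (distance 1)
tick(b1)                    # eating is now locally weakened
b2 = choose_behavior(4.0)   # a different behavior wins this time
```

This is the repetition limit from the text in miniature: the same stimulus no longer picks the same action twice in a row.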

maybe I am just stupid, or maybe there is no point in writing this, I don't 
know. at least by intuition it seems like it might work...



2.curious inhibition ( another idea for a log based ai)

"cr88192" < XXXX@XXXXX.COM > wrote in message 
news:QnFEd.8783$ XXXX@XXXXX.COM ...
> yes, this idea for me pops up occasionally.
> pardon the cross post, it is only 2 groups, just ones with long names...
>
> people here may remember me previously having obsessed on the idea.
> I still have no idea of originality or whether it could work. however, 
> now, I have come up with an idea of how to test it.
>
> the idea would be to simulate a "cat" in an effectively text-based world.
> as time goes on various things would show up or things will happen (need 
> to eat, crap, sleep, play with other cats, ...). the test would be if it 
> is able to figure out what to do in these situations, and if it is able to 
> maintain it's mood/state eventually.
>

now, at least the most basic part is done. it adapts to conditions in the 
log and exhibits basic learning-like behavior.

the base algo works thus far, and I have got the ai to be able to maintain 
its food level (sort of; it is curious though).

now, when there was no "cost" to the action, the ai would refrain from 
eating or eat to maintain itself in an optimal state, thus, most of the time 
was spent here.

I had figured I would add a cost, which would prevent it from eating a lot 
in a short period of time (and thus going from hungry to trying to eat while 
completely full). this worked in terms of adding a kind of weight that is 
reduced whenever an action occurs, and over time returns to 0 (or 
essentially no effect).
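the action-cost weight described here, pushed down whenever the action fires and drifting back to 0 over time, could be sketched as follows (the constants are illustrative guesses, not values from the actual experiment):

```python
# A penalty weight for one action: each firing pushes it down, and every
# tick it recovers toward 0 (no effect). Repeated quick actions stack the
# penalty, which is what discourages eating in long chains.
class ActionCost:
    def __init__(self, hit=-1.0, recovery=0.25):
        self.weight = 0.0
        self.hit = hit            # applied each time the action occurs
        self.recovery = recovery  # drift back toward 0 per tick

    def act(self):
        self.weight += self.hit

    def tick(self):
        if self.weight < 0.0:
            self.weight = min(0.0, self.weight + self.recovery)

cost = ActionCost()
cost.act(); cost.act()            # two quick actions -> weight = -2.0
for _ in range(8):
    cost.tick()                   # 8 ticks * 0.25 recovery -> back to 0.0
```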

now, once I added a cost, something odd happened:
instead of eating to maintain state and preventing chains of the action, it 
instead caused it to wait until it was fairly high on the hungry end, and 
then eat a whole lot to throw itself slightly into the full range (approx 
50-75% or so, from a point of about hungry 75-100%).

the inhibitions for eating in the mild hungry range (50-75%) are curiously 
high, effectively in general preventing eating there.

in the other ranges (ok, and slight/mild/complete full), there is 
inhibition, but lower than would be expected (however, by watching it still 
looks like it does not really eat in these ranges).

my guess is that it is utilizing the fact that eating while in the 
75-100% hungry range gives the most reinforcement for the longest period of 
time, whereas eating from the mild-hungry area would allow a lot less 
reinforcement for a shorter period of time.

it is still curious though, intuition says it shouldn't figure this one out 
(eg: it should be more eager and go for the quicker rewards of the 
mild-hungry range).


of course, it could just be a bug...

I have not tested with more complex situations/actions, so I don't know 
really.

it is a mystery what would happen if I just added a general "eating cost" 
that works by just punishing the ai a certain amount for eating.


I don't know really though.






5.Other 4-player simultaneous paddle game ideas (was: 4-player Tetris-style game idea)

Bill Kendrick < XXXX@XXXXX.COM > wrote:
> It occurred to me, one could do a four-player Tetris-style game
> on the Atari using two pairs of paddles.

Some other 4-player, simultaneous-on-the-same-computer game
ideas:

  * Multiplayer space battle ("Space War") or "Asteroids"
  * Multiplayer racing (3D like Pole Position, Enduro or
    Night Driving; or 2D like the GameLink game I saw,
    "Speed-Up!", or "Championship Rally" on the Lynx)
  * Multiplayer "Lunar Lander" (who can land first?)
  * Multiplayer shoot-em-up (like "Space Invaders")
    (I also realize there was "Demons to Diamonds", which
    was 2-players facing each other... this would be
    four players somehow sharing the ground at the bottom)

Heck, one could even make a jump-and-run game
(not unlike a simplified version of "New Super Mario Bros. Wii"),
since... for the most part, you use left/right/jump.


One I was thinking about that I'm pretty sure wouldn't
work would be a 4-player simultaneous "Combat" / "Tank"
game, since you need controls for both forward/back and
firing.

I could probably prototype a bunch of these in TurboBASIC XL.
("POKE HPOSP0+PLY,PADDLE(PLY)" ;) )

-- 
-bill!
Sent from my computer

6. an original game idea sales pitch idea

7. CFP - AI, Games and Narrative AI

8. Game AI Poll: The 2005 GDC AI Roundtable Format


