If Muse isn’t creating games, then what is it creating - and how should we respond?
That first unveiling came with issues of its own.
All of which leads to what is, on the surface, an utterly bizarre situation.
And the early results of this research are something that almost everyone in the industry seems to totally despise.
But what is actually going on?
Can the research team behind it offer more credible answers?
And how, exactly, ought we to react to all of it?
What is Muse?
A key point to emphasise here is, as Cook says, that “it’s generating images.
It’s not generating code or anything like that.”
Is Muse generating video games?
This notion of “playing” Muse raises one particularly important question: is Muse generating video games?
The emphatic answer from both Hofmann and Cook is, essentially: no.
“That’s just a video that has some interaction with it.”
“And the video was kind of conditioned on keyboard inputs.”
Whether that does or doesn’t disqualify it from actually being a game is up for debate.
As Cook himself says, “it gets a bit philosophical at some point.”
“Does it exist on your hardware more than a streamed one from a server?
Where does it break?
What doesn’t work, what works?”
They are, by every measure of playability, unplayable.
That, technically, is a video game.
But still not a particularly good one.
“Gameplay ideation”, preservation, or something else: who and what is Muse for?
Who, or what, is Muse actually for?
“And one that Phil, for example, was particularly excited about.”
“I’m certain it won’t be the only one.
But I see that.
I do see it as something that could be exciting to explore along with other software areas.”
Cook is unequivocal here, meanwhile.
“What stuff is maybe not seen at all?
Maybe there’s secrets no one’s ever found.
It’s still useful that we built it.
But it would be great if we also had the original.”
“There are philosophical questions around: can models be creative?”
Hofmann says, when I ask her to clarify.
“And I’m quite firmly on the side that they are not.
I’ve seen a lot of confusion around that in the literature.”
When does that potential become actual, usable reality, in the form of some kind of developer-friendly testing tool?
Whether or not this might actually be useful to developers is another issue, however.
“I thought that was really important, and we need more of that stuff.”
Isn’t this all being done back to front?
In many cases it feels entirely justified.
Naturally, that same criticism was levelled at Muse again after the reveal of its Quake 2 demo.
Indie developer Sos Sosowski issued one of the most widely shared putdowns on Bluesky, making exactly this point.
“That’s very on-track with [the] AI trend,” he wrote.
“A solution looking for a problem.
It’s yet another in a series of ‘reveals’ that is bug-ridden and broken.”
How did the research timeline actually unfold?
Whose idea was Muse?
How did it begin, or change over time?
The one exception to that?
He has a few suggestions as to how this initial research might have come to be.
“Where do they see things going?”
The research team itself then decides, “within the team”, how to direct its research.
“What are the next frontiers that we can explore?”
Notably, Muse is actually one of two research projects being worked on in parallel by the team.
“It does seem to be a shareholders thing.”
To Cook, the way Muse has been presented in general is a concern.
“I really felt like they were kind of hung out to dry in some ways.”
“Part of it comes from the academic research community.
‘Let’s put it out.’”
How do we make that very clear?
That said, Hofmann concedes that the downside of going this early is the impact on the conversation.
The technology is not ready.
As far as broader ethical concerns go, this is far from the last of it.
“And these two things are also really important for robots.”
“There doesn’t seem to be a way around this.”
“I think it’s justified for a number of reasons,” he says.
“And I think that’s completely understandable.”
The industry has, of course, been going through a prolonged period of unprecedented layoffs and general uncertainty.
It’s not just about the morals or the ethics or anything like that.
“This is a thing they care about.”
A final, often forgotten question is, for Cook, also the most important.
“Not even: does it make games better; but do people want it?”
“So with generative AI, the question should just be: do we want this?
We don’t have to do anything.
We don’t even have to make games if we don’t want to.
And so players should think more about: what do they really want from the future of games?
Because they can want anything.
They don’t just have to want the things that they see in tech demos at E3.
They can build whatever future industry they want.”