We also briefly discussed this in Games Master, if only to discover how wide and diverse the range of perspectives is. I feel it misrepresents the subject to talk about a “literal definition” and to explicitly include “win conditions”, because there are multiple attempted definitions, and many of them do not include win conditions.
https://en.wikipedia.org/wiki/Game
One example of such a definition:
“To play a game is to engage in activity directed toward bringing about a specific state of affairs, using only means permitted by specific rules, where the means permitted by the rules are more limited in scope than they would be in the absence of the rules, and where the sole reason for accepting such limitation is to make possible such activity.” (Bernard Suits)
You seem to refer to Chris Crawford’s definition, which is in part:
If no goals are associated with a plaything, it is a toy. (Crawford notes that by his definition, (a) a toy can become a game element if the player makes up rules, and (b) The Sims and SimCity are toys, not games.) If it has goals, a plaything is a challenge.
Explicitly calling SimCity “not a game” is purely academic talk, detached from reality. For everyone else, SimCity is clearly a game. If you want to buy it, you look for games, not toys. I feel a definition is questionable when it declares something to be not what everybody thinks it is.
Was Minecraft not a game until it included “The End”? I loved playing Minecraft, but I rarely cared about The End, even after it was included. When a player cannot tell the difference between a version of a game which includes a win condition, and a version which does not, how can the existence of that condition be a decisive factor?
If we widen the scope to include any game, not just video games, we can also have a look at popular children’s games like https://en.wikipedia.org/wiki/Word_Association. My theater group loves to play win-free games as a warmup practice.
From my point of view, win conditions are a common characteristic of games, but not necessary or defining. Coming up with a short definition which captures all games and excludes all non-games is surprisingly hard.
Me in tech support.
Customer calls: “Internet is not working!!”
Me: “Router lights status?”
Customer: “Can’t tell.”
Me: “Why?”
Customer: “Router still in box.”
Me: “…?”
Me (pretends it was just an error of communication): “Can you please describe the lights on your router?”
Customer: “I can’t. It’s still in the packaging. The box is on my table.”
Me: “…??? … You … need at least electricity to power this device.”
Customer spirals into rage and madness: “I ordered wireless internet!! I won’t plug any cables in! I did not want any wires!!!”
Then thanks for pointing that out. It seems I’m pretty much unaware of this. Maybe we watched different videos on different topics. I found her takes on physics generally well reasoned, and I do remember her marking her opinions as such and at least sketching differing views.
This is just to tell you where I’m coming from; I don’t want to argue. If you’re right and I am unaware, I want to learn. So if you’d like to point out an example or two, I’d be happy to look into it.
Thank you very much for the effort! I also searched for text or video, but found none.
I understand now what you previously meant by streaming code via TV.
That’s what it sounded like when you loaded a program, and it’s exactly what they’d play on the TV so you could record your own tape.
Now I have a new confusion: Why would they let the speaker play the bits being processed? It surely was technically possible to load a program into memory without sending anything to the speaker. Or wasn’t it, and it was a technical necessity? Or was it an artistic choice?
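For illustration, here is a minimal sketch (in Python, with made-up constants) of the kind of encoding those cassette interfaces used, loosely following the Kansas City standard: a 0 bit becomes a short burst of 1200 Hz, a 1 bit a burst of 2400 Hz, at 300 bits per second. Real machines differed in framing and speed, so treat this as an assumption-laden example rather than a description of any specific system.

    # Hypothetical sketch: turn a few bytes into the kind of audio an early
    # home computer cassette interface produced. Loosely based on the Kansas
    # City standard; real machines differed, so all constants are illustrative.
    import math
    import struct
    import wave

    SAMPLE_RATE = 44100   # playback rate of the generated WAV file
    BAUD = 300            # bits per second

    def tone(freq, n_samples):
        """One bit period as a sine tone at the given frequency."""
        return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n_samples)]

    def encode_byte(byte):
        """Frame a byte: start bit 0, eight data bits (LSB first), two stop bits."""
        bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1, 1]
        samples_per_bit = SAMPLE_RATE // BAUD
        out = []
        for bit in bits:
            out += tone(2400 if bit else 1200, samples_per_bit)  # 1 -> 2400 Hz, 0 -> 1200 Hz
        return out

    def write_wav(path, data):
        """Write the encoded bytes as a mono 16-bit WAV you can actually listen to."""
        samples = []
        for b in data:
            samples += encode_byte(b)
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(SAMPLE_RATE)
            w.writeframes(b"".join(struct.pack("<h", int(s * 32000)) for s in samples))

    write_wav("program.wav", b"HELLO")  # roughly 0.2 seconds of screeching at 300 baud

Whatever ends up on the tape (or in the TV broadcast) is just this audio; whether the machine also had to route it to its own speaker while loading is exactly the question above.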
While all of what you say is true, we simply cannot teach everything since there is just too much knowledge and too little time in a human life.
And not everyone is equally interested or capable in learning everything.
This is necessarily the world we live in, even without adding capitalism or any evil intentions to the mix. Any education you can get or offer can only be a more or less well selected subset of the knowledge available.
In this light, I don’t see it as a dramatic loss to remove educational emphasis from skills which can easily be replaced with modern technology. It would make sense to shift the focus to teaching a critical usage of said technology.
Yes, within limits. Due to the information explosion, it became impossible to learn “everything”. We need to make choices, prioritize.
How does your voting behaviour suffer because you lack an understanding of how exactly potentiometers work, or of how to express historical events in modern dance?
Both have inherent worth, but not the same worth for each person and context. We luckily live in a society with division of labor. Not everyone has to know or like everything. While I absolutely admire science, not everyone has to be a scientist.
Because there is more knowledge available than we can ever teach a single person, it is entirely possible to spend a lifetime learning things that never inform your ballot decisions. I would much rather have students optimize some parts of their education with AI, to free up capacity for other important subjects which may seem less related to their discipline. For example, many of my fellow computer science students were completely unaware of how it could be ethically questionable to develop pathfinding algorithms for military helicopters.
you’re assuming the knowledge will never be used, or that we should avoid teaching things that are unlikely to be used.
Not exactly. What I meant to say is: some students will never use some of the knowledge they were taught. In the age of the information explosion, there is practically unlimited knowledge ‘available’. Which part of this knowledge should be taught to students? For each bit of knowledge, we can make your hypothetical argument: it might become useful in the future; an entire important branch of science might be built on top of it.
So this on its own is not an argument. We need to argue why this particular skill or knowledge deserves the attention and focus to be studied. There is not enough time to teach everything. Which in turn can be used as an argument for more computer-assisted learning and teaching. For example, I found ChatGPT useful for exploring topics. I would not have used it to cheat in exams, but probably to prepare for them.
the choice of two doctors, one of whom passed using AI, and the other passed a more traditional assessment. Which doctor would you choose and why? Surely the latter, since they would have also passed with AI, but the one who used AI might not have passed the more traditional route due to a lack of knowledge.
Good point, but it depends on context. You assume the traditional doc would have passed with AI, but that is questionable. These are complex tools with often counterintuitive behaviour. They need to be studied and approached critically to be used well. For example, the traditional doc might not have spotted the AI hallucinating, because she wasn’t aware of that possibility.
Further, it depends on their work environment. Do they treat patients with, or without, AI? If the doc is integrated into a team of both human and artificial colleagues, I would certainly prefer the doc who practiced under these working conditions and proved in exams that they can deliver the expected results this way.
In an environment where knowledge for the sake of knowledge is not prized
I feel we left these lands in Europe when diplomas were abandoned for the bachelor/master system, 20 years ago. Academic education is streamlined, tailored to the needs of industry. You can take a scientific route, but most students don’t. The academia which you describe as if it were threatened by something new might exist, but it lives alongside a more functional academia where people learn things to apply them in our current reality.
It’s quite a hot take to blame things like the antivax movement on academic education. For example, I question whether the people proposing and falling for these ‘ideas’ are academics in the first place.
Personally, I like learning knowledge for the sake of knowledge. But I need time and freedom to do so. When I was studying computer science with an overloaded schedule, my interest in toying with ideas and diving into backgrounds was extremely limited. I was also expected to finish in an unreasonably short amount of time. If I could have sped up some of the more tedious parts of my studies with the help of AI, that could have freed up resources and interest for knowledge for its own sake.
let’s rebuild education towards an employer-centric training system, focusing on the use of digital tools alone. It works well, productivity skyrockets for a few years, but the humanities die out, pure mathematics (which helped create AI) dies off, and so does theoretical physics/chemistry/biology. Suddenly, innovation slows down, and you end up with stagnation.
Rather than moving us forward, such a system would lock us into place and likely create out of date workers.
I found this too generalizing. Yes, most people only ever need and use productivity skills in their work life. They do no fundamental research. Whether their education went this way or that has no effect on the advancement of science in general, because these people don’t do science in their careers.
Different people with different goals will do science, and for them an appropriate education makes sense. It also makes sense to have everything in between.
I don’t see how it helps the humanities and other sciences to teach skills which are never used, or how it helps to teach methods which no one applies in practice. How is it a threat to education when someone uses a new tool intelligently so they can pass academic exams? How does that make them any less valuable for working in that field? Assuming, of course, that the exam reflects what working in that field actually requires.
I think we can also spin an argument in the opposite direction: More automation in education frees the students to explore side ideas, to actually study the field.
I guess you’re right, but find this a very interesting point nevertheless.
How can we tell? How can we tell that we use and understand language? How would that be different from an arbitrarily sophisticated text generator?
For the sake of the comparison, we should talk about the presumed intelligence of other people, not our (“my”) own.
That very much depends on what you define as “intelligent”. We lack a clear definition.
I agree: These early generations of specific AIs are clearly not on the same level as human intelligence.
And still, we can already have more intelligent conversations with them than with most humans.
It’s not a fair comparison though. It’s as if we compared the language region of a toddler with the complete brain of an adult. Let’s see what the next few years bring.
I’m not making that point, just mentioning it can be made on an academic level: there’s a paper about the surprising emergent capabilities of GPT-4, titled “Sparks of AGI”.
Sorry, I could have been more clear. I did not mean to equate current LLMs with human brains. The question was rather:
Can’t we describe the working of (other) human brains in a very similar fashion as you did before? Or where exactly is the difference which sets us apart?
world models, we have imagination, a physical and metaphysical simulation of the world around us
AIs which can and need to interact with the physical world have those, too. Naturally, an AI which is restricted to language has much less necessity and opportunity to develop these, much like our brain area for smell is probably not so good at estimating velocities and catching a ball.
I think your approach of demystifying technology is valid and worthwhile. I’m just not sure it does what you maybe think it does: highlight the difference from our intelligence.
“Something trained only on form” — as all LLMs are, by definition — “is only going to get form; it’s not going to get meaning. It’s not going to get to understanding.”
I had lengthy and intricate conversations with ChatGPT about philosophy and religious concepts. It allowed me to playfully peek into Spinoza’s worldview, with a few errors.
I have no problem accepting that it is form, but I cannot deny it conveys meaning as if it understood.
The article is very opinionated and dismissive in that regard. It even goes so far as to predict what future research and engineering cannot achieve, which makes it untrustworthy to me.
We cannot even pin down what we mean by intelligence and meaning. Despite being far too long, the article doesn’t even mention emergent capabilities or quote any of the many contrary scientific views.
Apart from the unnecessarily long anecdotes about autistic and disabled people, did anybody learn anything from this article? I feel it’s an uncritical parroting of what people like to think anyway, to feel superior and secure.
Yes, I feel you.
And yes, that’s how it is. It’s an insanely complex industry if you really want to understand how things work.
Which you don’t need to get things done.
Which you still can, if you really want to and are willing to invest the time and energy to study it thoroughly for many years, if not decades.
But even then, chances are you’ll be touching libraries, concepts or technologies which you haven’t yet studied in depth. I think you need to be both aware and tolerant of limited knowledge, and willing to learn continuously.