A challenge is called "possible" when it admits at least one resolution. This holds whether the activity is playful or not, and whether the challenge is deliberately designed by another agent (e.g. a test) or emerges spontaneously (e.g. crossing a river, convincing someone).
For a designed challenge, the designer needs to know the nature of the challenge he is proposing and the way(s) to solve it.
It is then necessary to divide designed challenges into two categories:
The challenge with an evaluative purpose.
The challenge with a playful purpose.
Careful, we are talking about purpose here. A challenge with an evaluative purpose may well be fun, and vice versa; what matters is the reason it was designed.
For the challenge with an evaluative purpose, the goal is to evaluate one or more skills of the participant(s). In a simplified model, the challenge is a black box: we do not know how the subject acted, only the result of the challenge. It is therefore important for the designer to make sure that the challenge offers no hidden solution, that is, a solution the designer did not envisage, which would allow the subject to solve the test in another way, without necessarily using the skills being evaluated. To avoid this kind of situation, it is important to have a completely controllable environment and a precise test that excludes unwanted resolutions.
Ask yourself: consider a subject who completes a French-language test not by solving the problem posed by each question, but by finding an algorithm that yields the answer to every question. Does this subject deserve to pass the test?
We must then dissociate the test as an idea from the test as applied: as an idea, the purpose of this test was to verify the subject's ability to speak French. But in reality, the designer designed a test, and the instruction was to solve this test, which is far from being the same thing.
It is therefore important to consider that a test never assesses anything other than the subject's ability to solve the test.
The holder of the title of Chess Champion is nothing more than the subject who passed the test that grants that title, and although the test involves playing chess, it is fallacious to conclude that the winner is the best chess player. Thus, if the creator of a chess championship wants the best chess player to emerge victorious, he or she must first define what the best chess player is, and then create a system to find him or her. This may involve an astronomical number of variables, such as the number of games and the nature of the tournament: is it an elimination system, a pool system, etc.?
Note that we are not even talking about the chess games themselves here, but about the nature of the tournament. We could also add whether the games are timed, who plays first, etc.
It seemed important to me to mention challenges with an evaluative purpose, as they are a prerequisite for what follows: playful challenges.
Playful challenges are widely used in games. We will simplify the problem by reducing the intention of this kind of challenge to the player's fun, and nothing else. You could argue that you know of playful challenges that go deeper than fun, but that is not the point here.
In playful challenges, the feeling of happiness can come from the act of solving the challenge, but also from the feeling of accomplishment at the end of it. It is important to note here that we are trying to create a feeling, an impression, in the subject.
Let us then ask ourselves these questions: How can we maximize the subject's happiness? How can we make the playful challenges we design appeal to as many people as possible?
Personally, I advise any challenge creator to list all the skills required, or that may be required, to solve a challenge.
To support this list of skills, I also advise creators to rank and quantify the importance of each of these skills to the success of the challenge. The sum of the values assigned to the skills should equal 100.
In the context of a game where factors external to the player have an influence on his success, it can be judicious to position these factors within the scale (e.g. Randomness).
We will call this process the Qualitative Competency Scale (QCS).
Please understand that this scale is for system analysis purposes, and is in no way a scale representing the quality of a game.
In addition, the QCS is not usable to know the level of skill required to succeed in the challenge. Two Challenges may require 100% Dexterity, but require radically different levels of dexterity.
Example: the QCS of a racing game with items.
Driving Skill: 20%
Item Management: 20%
Map Knowledge: 20%
Randomness: 30%
Meta Picks: 10%
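As a minimal sketch, a QCS can be represented as a simple skill-to-weight mapping with a check that the weights sum to 100. The categories below just mirror the racing example above; the code is illustrative, not part of any established tool.

```python
# Minimal sketch: a QCS as a skill-to-weight mapping that must sum to 100.
racing_qcs = {
    "Driving Skill": 20,
    "Item Management": 20,
    "Map Knowledge": 20,
    "Randomness": 30,
    "Meta Picks": 10,
}

def check_qcs(qcs: dict[str, float]) -> None:
    """Raise if the weights do not add up to 100."""
    total = sum(qcs.values())
    if abs(total - 100) > 1e-9:
        raise ValueError(f"QCS weights sum to {total}, expected 100")

check_qcs(racing_qcs)
```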
Applications of QCS
When creating a game with a defined audience, it can be interesting to use QCS to ensure that the skills required to complete the challenge are in line with the audience's expectations. Fans of strategy games will probably expect a game to have its outcome determined primarily by Strategy, not the speed of execution, for example.
When creating a game inspired by similar games, it may be interesting to use the QCS to analyze how these games are challenging, what differentiates them, and how to position oneself against these competitors.
How to build your QCS?
Step 1. Identification of competencies
List all of the skills that can be used to complete your challenge. If you are doing the QCS of a group of challenges, do not hesitate to first do the QCS of each challenge in this group (e.g. a triathlon). To these skills, add all the ancillary elements (e.g. chance).
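For a group like a triathlon, one possible starting point is to average the per-challenge scales into a group QCS, assuming each sub-challenge counts equally (an assumption you would then adjust). The sub-QCSs and skill names below are purely illustrative.

```python
# Hypothetical sketch: build a group QCS by averaging the QCS of each
# sub-challenge, assuming every sub-challenge weighs the same.
swim_qcs = {"Swimming technique": 70, "Endurance": 25, "Pacing": 5}
bike_qcs = {"Cycling technique": 40, "Endurance": 40, "Pacing": 20}
run_qcs = {"Running technique": 35, "Endurance": 45, "Pacing": 20}

def combine_qcs(*scales: dict[str, float]) -> dict[str, float]:
    """Average several QCSs; the result still sums to 100 by construction."""
    combined: dict[str, float] = {}
    for scale in scales:
        for skill, weight in scale.items():
            combined[skill] = combined.get(skill, 0.0) + weight / len(scales)
    return combined

triathlon_qcs = combine_qcs(swim_qcs, bike_qcs, run_qcs)
```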
Step 2. Elimination of prerequisites and ancillary elements
We are sometimes tempted to put essential prerequisites into the QCS, for example the skill "being endowed with the sense of sight". In most cases, I advise you to eliminate these prerequisites. Indeed, except in rare cases, I advise you to manage the accessibility of your game as a completely separate concern; this will allow you to find genuinely effective solutions. Also, some ancillary elements do not really make sense to rate, even though they matter for the chances of succeeding in the challenge. I don't have a magic rule to detect which ones to remove; it depends on your challenge, and on what you plan to do with it.
Step 3. Go from the list to the scale.
This is the trickiest part: how do you give a numerical value to each of these skills? The first thing that is important to note, I repeat, is that you should not rate them according to the difficulty involved. Just because inventory management is very difficult in your game doesn't mean that this skill should be an important part of it. The questions you need to ask yourself are: is it possible to succeed in the game without inventory management? How much does managing inventory perfectly increase the chances of success?
If you have trouble giving values to your skills, maybe you should start by prioritizing them: which is the most useful skill for a Mario Kart player, knowing the track or managing your items?
Once you have ranked them, give each skill a share until you reach 100%, then adjust.
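If exact percentages are hard to pin down, a minimal sketch is to give each skill a rough share that reflects your ranking and then normalize so the total reaches 100. The skill names and shares below are only illustrative.

```python
# Sketch: turn rough, ranked shares into a QCS that sums to 100.
rough_shares = {
    "Knowledge of the circuit": 5,  # ranked most useful
    "Item management": 4,
    "Driving": 3,
    "Randomness": 2,
    "Timing": 1,  # ranked least useful
}

def normalize_to_100(shares: dict[str, float]) -> dict[str, float]:
    """Scale the shares so they sum to 100, ready for manual adjustment."""
    total = sum(shares.values())
    return {skill: round(100 * value / total, 1) for skill, value in shares.items()}

print(normalize_to_100(rough_shares))
# e.g. {'Knowledge of the circuit': 33.3, 'Item management': 26.7, ...}
```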
I also believe in the use of data analysis to get a more accurate QCS: you could, for example, correlate an FPS player's accuracy percentage with his win rate, take a closer look at how the so-called "meta" choices in a competitive game relate to the chances of success, at potion management when defeating a boss in an RPG, etc.
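As a hedged sketch of the FPS example, here is what such a correlation could look like with pandas; the CSV file and column names are assumptions about your own telemetry, not an existing dataset.

```python
# Hypothetical sketch: does accuracy correlate with winning in our FPS?
# "fps_matches.csv" and its columns are assumed telemetry exports.
import pandas as pd

df = pd.read_csv("fps_matches.csv")  # one row per player: accuracy, win_rate
correlation = df["accuracy"].corr(df["win_rate"])  # Pearson by default
print(f"Accuracy vs. win rate correlation: {correlation:.2f}")

# A value near 0 suggests aiming should weigh little in the Actual QCS;
# a strong positive value suggests it deserves a larger share.
```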
For example, I'm working on a MOBA, and I want to identify the importance of:
- Knowledge of the character being played
- Randomness
- Strategy
- The choice of items that can be purchased
- Other
In order to know the importance of these 5 criteria, I first rank them (instinctively), then assign each a value, in such a way that the sum of the 5 makes 100.
Note that my choice of 5 criteria is perhaps not good, or not precise enough: do I consider my allies in the "Randomness" category or in "Other"? Shouldn't I create a "team management" category?
For the moment, I decide to start with these 5 criteria, and I have quantified them. In order to check their validity, I implement a simple system: I correlate a player's win rate on a character with the number of games that player has played on this character. If I realize that the number of games has no influence on the win rate, I can probably decide to remove this category.
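A minimal sketch of that check, assuming a hypothetical telemetry export (the file and column names are invented for illustration): bucket players by their experience on the character and compare average win rates.

```python
# Hypothetical sketch: does experience on a character influence win rate?
# "character_stats.csv" and its columns are assumed telemetry exports.
import pandas as pd

df = pd.read_csv("character_stats.csv")  # columns: games_played, win_rate

# Bucket players by how many games they have on the character,
# then compare the average win rate of each bucket.
buckets = pd.cut(df["games_played"], bins=[0, 10, 50, 200, float("inf")])
print(df.groupby(buckets, observed=True)["win_rate"].mean())

# If the averages are flat across buckets, "Knowledge of the character
# being played" can probably be shrunk or removed from this QCS.
```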
Step 4. Subdivide, or group
"Strategy" is probably not a skill to be scored in the CQE of a Strategy game. Indeed, the term is too vague, and implies many subdivisions. Feel free to subdivide a skill that is too broad into several smaller ones if you think it makes sense. Note that this is not necessarily the case. You might want to look for the influence of dexterity within your game, in which case you could create a CQE with two categories: "Dexterity" and "Other".
The QCS is not perceptible to the Player.
If you ask a child to establish the QCS of a battle game, chances are they will tell you that the game is 100% determined by chance, a variable that cannot be influenced by the player, and yet it can be.
The example seems simplistic, but this holds true in the vast majority of cases.
What this statement implies is that the challenge you are selling will never be perceived by the player without a filter. Use this rule to your advantage: instead of being disappointed that the player doesn't understand that your game is highly skill-based, prove to them that it is! And by extension, you can even deceive the player about the importance of the skills actually used.
Ask the player
Following playtests, don't hesitate to ask the player how important each skill is in the game. You can also ask which skills he or she used to overcome the challenge. Note that these are very different questions. Indeed, a player may readily agree that certain skills are very important within the game without having used them at all (for example, a Winston player in Overwatch will find it hard to deny that aiming plays an important part in the game, even though he or she does not use it at all).
QCS, QCSs
It is sometimes relevant to create several QCSs: some are additive (several QCSs for the several challenges the subject must complete within your challenge), some are alternatives (e.g. one QCS for DPS players and one QCS for Tank players).
Intention QCS, Actual QCS
Differentiate between two QCSs: the one you would like for your challenge, called the Intention QCS, and the Actual QCS, which is the analysis of your current challenge.
Thus, if you have to decide on a modification within your challenge, ask yourself the question: does it bring you closer to the Intention QCS?
If the answer is no, then you may need to question the coherence of this decision, or the coherence of your Intention QCS. It is okay to revisit your Intention QCS, but it is important to understand why you are doing so.
For example, I want to make a racing game, like Mario Kart. In order to differentiate it from the competition, I want item management to be very important in the player's success. I then create the Intention QCS for my future game:
Driving: 20%
Aiming: 10%
Timing: 10%
Item management: 25%
Randomness: 10%
Knowledge of the circuit: 25%
Several months later, during the production of the game, the players complain: they find that the games are not interesting enough, and they all end in the same way: one player takes the lead at the beginning, and nobody can catch up with him anymore.
I draw up the current Actual QCS of my game, using playtests, analysis tools, and my own analysis. Faced with the playtesters' complaints, it seems very important here to check whether the player who takes the lead is always the same, and whether he is stronger than the other players, or whether it is just chance that gives him a lead at the beginning, with my system not allowing the other players to catch up.
Here is the Actual QCS:
Driving: 15%
Aiming: 10%
Timing: 15%
Item management: 25%
Randomness: 5%
Knowledge of the circuit: 30%
Finally, I realize that the players who win at my game are simply better, and that my Actual QCS is not that far from my Intention QCS (especially since item management is perfectly calibrated!). Now that I have the necessary tools, I can start asking myself the question: should I go back on my Intention QCS, or should I keep it and find other solutions to these problems?
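As a small sketch of that comparison, here is one way to lay the two scales above side by side and see where the Actual QCS drifts from the Intention QCS; the code is only illustrative.

```python
# Sketch: per-skill gap between the Intention QCS and the Actual QCS.
intention = {"Driving": 20, "Aiming": 10, "Timing": 10,
             "Item management": 25, "Randomness": 10,
             "Knowledge of the circuit": 25}
actual = {"Driving": 15, "Aiming": 10, "Timing": 15,
          "Item management": 25, "Randomness": 5,
          "Knowledge of the circuit": 30}

for skill in intention:
    gap = actual[skill] - intention[skill]
    print(f"{skill:25} intention {intention[skill]:3}%  actual {actual[skill]:3}%  gap {gap:+}")

# The sum of absolute gaps gives a rough "distance" between the two scales.
print("Total drift:", sum(abs(actual[s] - intention[s]) for s in intention))
```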
Random Parallel Learning
A theory borrowed from Tom Cadwell, from one of his lectures. In order to integrate this concept into the QCS system, but also because I would not dare to speak on Tom Cadwell's behalf, we will use the term "Learning and Using Parallel Skills".
Learning and Using Parallel Skills
This is the fact that a challenge can be solved through the use of different skills in parallel. Thus, within a series of challenges, it implies that the subject can progress in different ways to overcome those challenges.
Let's take the example of League of Legends: a player can climb the ELO ladder through mastery of his champion, communication with his team, knowledge of the game, etc. Thus, the player can express himself and improve in the skills he prefers to use in order to succeed in the challenge.
We can then speak of a permissive QCS.
Advantages of a permissive QCS:
Allowing players to improve even if they have reached their maximum in one of the skills.
Making the game fun for more players.
Allowing a player to have fun in different ways within the game.
A permissive QCS seems to be a component of a game's success; indeed, MMORPGs, battle royales, and MOBAs all have very permissive QCSs.
QCS: Does it explain a player's attraction to a game?
A strategy gamer won't necessarily fall in love with your game even if it is mainly determined by strategy.
I don't think the QCS alone explains a player's interest in a game. On the other hand, it is, in my opinion, not to be neglected. Indeed, many thinkers argue that solving challenges is engaging for the player, yet the taxonomies that attempt to classify players by what appeals to them in games spend little time on the kinds of challenges those players must solve.