Computer chess encompasses hardware and software capable of playing chess without human guidance. Computer chess serves as solo entertainment (allowing players to practice and improve when no strong human opponent is available), as a tool for chess analysis, as a platform for computer chess competitions, and as a research area offering insight into human cognition.
Current chess engines can defeat even the strongest human players under normal conditions. Whether computation could ever solve chess remains an open question.
Availability
Computers capable of playing chess are now accessible to the average consumer. From the mid-1970s onward, dedicated chess computers have been available for purchase. There are many chess engines such as Stockfish, Crafty, Fruit and GNU Chess that can be downloaded from the Internet free of charge. These engines, run on an up-to-date personal computer, can defeat most master players under tournament conditions. Top programs such as Shredder and Fritz, or the open-source Stockfish, have surpassed even world-class players at blitz and short time controls. In October 2008, Rybka topped the CCRL, CEGT, CSS, SSDF and WBEC rating lists and had won many recent official computer chess tournaments, such as CCT 8 and 9, the 2006 Dutch Open Computer Championship, the 16th IPCCC, and the 15th World Computer Chess Championship. As of February 3, 2016, Stockfish was the top-ranked chess program on the IPON rating list.
Computer chess rating lists
CCRL (Computer Chess Rating Lists) is an organization that tests the strength of computer chess engines by playing the programs against one another. CCRL was founded in 2006 by Graham Banks, Ray Banks, Sarah Bird, Kirill Kryukov and Charles Smith; as of June 2012 its members were Graham Banks, Ray Banks (who participates only in Chess960, or Fischer Random Chess), Shaun Brewer, Adam Hair, Aser Huerga, Kirill Kryukov, Denis Mendoza, Charles Smith, and Gabor Szots.
The organization runs three different lists: 40/40 (40 minutes for every 40 moves played), 40/4 (4 minutes for every 40 moves played), and 40/4 FRC (the same time control, but Chess960). Pondering (or permanent brain) is switched off, and timing is adjusted to an AMD64 X2 4600 (2.4 GHz) CPU using Crafty 19.17 BH as a benchmark. A common, neutral opening book (as opposed to each engine's own book) is used up to 12 moves into the game, along with 4- or 5-man endgame tablebases.
Computer versus human
Using "ends-and-means" heuristics, a human chess player can intuitively determine optimal outcomes and how to achieve them regardless of the number of moves required, but a computer must be systematic in its analysis. Most players agree that the ability to look at least five moves ahead (ten plies) when necessary is needed to play well. Normal tournament rules give each player an average of three minutes per move. On average there are more than 30 legal moves per chess position, so a computer must examine on the order of a quadrillion (10^15) possibilities to look ahead ten plies (five full moves); one that could examine a million positions a second would need more than 30 years.
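A minimal back-of-the-envelope sketch of this combinatorial explosion, assuming the uniform branching factor of 30 cited above:

```python
branching = 30                      # average legal moves per position
plies = 10                          # five full moves for both sides
nodes = branching ** plies          # about 5.9e14 positions
rate = 1_000_000                    # "very optimistic" million positions/sec
years = nodes / rate / (60 * 60 * 24 * 365)
print(f"{nodes:.2e} nodes -> {years:.1f} years at {rate:,} positions/sec")
# ~18.7 years; at the quoted quadrillion (1e15) nodes it exceeds 30 years
```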
After the discovery of refutation screening (the application of alpha-beta pruning to optimizing move evaluation) in 1957, a team at Carnegie Mellon University predicted that a computer would defeat the world human champion by 1967. It did not anticipate the difficulty of determining the right order in which to evaluate branches. Researchers worked to improve programs' ability to apply the killer heuristic, re-examining unusually high-scoring moves while evaluating other branches, but into the 1970s most top chess players believed that computers would not soon be able to play at Master level. In 1968, International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years, and in 1976 Senior Master and professor of psychology Eliot Hearst of Indiana University wrote that "the only way a current computer program could ever win a single game against a master player would be for the master, perhaps in a drunken stupor while playing 50 games simultaneously, to commit some once-in-a-year blunder".
In the late 1970s, chess programs suddenly began to beat top human players. The year of Hearst's statement, Northwestern University's Chess 4.5, playing in the Class B section of the Paul Masson American Chess Championship, became the first program to win a human tournament. Levy won his bet in 1978 by beating Chess 4.7, but it achieved the first computer victory against a Master-class player at tournament level by winning one of the six games. In 1980 Belle began often defeating Masters. By 1982 two programs played at Master level and three were slightly weaker.
The sudden improvement, without any startling theoretical breakthrough, surprised humans, who did not expect that Belle's ability to examine 100,000 positions a second (about eight plies) would be sufficient. The Spracklens, creators of the successful microcomputer program Sargon, estimated that 90% of the improvement came from faster evaluation speed and only 10% from improved evaluations. New Scientist stated in 1982 that computers "play terrible chess ... clumsy, inefficient, diffuse, and just plain ugly", but humans lost to them by making "horrible blunders, astonishing lapses, incomprehensible oversights, gross miscalculations, and the like" much more often than they realized; "in short, computers win primarily through their ability to find and exploit miscalculations in human initiatives".
By 1982, microcomputer chess programs could evaluate up to 1,500 moves a second and were as strong as mainframe chess programs of five years earlier, able to defeat a majority of players. While only able to look ahead one or two plies more than at their debut in the mid-1970s, doing so improved their play more than experts expected; seemingly minor improvements "appear to have allowed the crossing of a psychological threshold, after which a rich harvest of human error becomes accessible", New Scientist wrote. While reviewing SPOC in 1984, BYTE wrote that "computers (mainframes, minis, and micros) tend to play ugly, inelegant chess", but noted Robert Byrne's statement that "tactically they are freer from error than the average human player". The magazine described SPOC as a "state-of-the-art chess program" for the IBM PC with a "surprisingly high" level of play, and estimated its USCF rating as 1700 (Class B).
At the 1982 North American Computer Chess Championship, Monroe Newborn predicted that a chess program could become world champion within five years; tournament director and International Master Michael Valvo predicted ten years; the Spracklens predicted 15; Ken Thompson predicted more than 20; and others predicted that it would never happen. The most widely held opinion, however, was that it would occur around the year 2000. In 1989, Levy was defeated by Deep Thought in an exhibition match. Deep Thought, however, was still considerably below World Championship level, as the reigning world champion Garry Kasparov demonstrated in two strong wins in 1989. It was not until a 1996 match with IBM's Deep Blue that Kasparov lost his first game to a computer at tournament time controls, in Deep Blue - Kasparov, 1996, Game 1. That game was, in fact, the first time a reigning world champion had lost to a computer at regular time controls. Kasparov, however, regrouped to win three and draw two of the remaining five games of the match, for a convincing victory.
In May 1997, an updated version of Deep Blue defeated Kasparov 3½-2½ in a return match. A documentary about the confrontation, Game Over: Kasparov and the Machine, was made in 2003. IBM maintains a website about the event.
With increasing processing power and improved evaluation functions, chess programs running on commercially available workstations began to rival top players. In 1998, Rebel 10 defeated Viswanathan Anand, at the time ranked second in the world, by a score of 5-3. Most of the games, however, were not played at normal time controls. Of the eight games, four were blitz games (five minutes plus a five-second Fischer delay (see time control) for each move); these Rebel won 3-1. Two were semi-blitz games (fifteen minutes for each side), which Rebel also won (1½-½). Finally, two games were played as regular tournament games (forty moves in two hours, one hour sudden death); these Anand won 1½-½. In fast games computers played better than humans, but at classical time controls, at which a player's rating is determined, the advantage was not so clear.
In the early 2000s, commercially available programs such as Junior and Fritz were able to draw matches against former world champion Garry Kasparov and classical world champion Vladimir Kramnik.
In October 2002, Vladimir Kramnik and Deep Fritz competed in the eight-game Brains in Bahrain match, which ended in a draw. Kramnik won games 2 and 3 with "conventional" anti-computer tactics, playing conservatively for a long-term advantage the computer could not see in its game-tree search. Fritz, however, won game 5 after a huge blunder by Kramnik. Game 6 was described by the tournament commentators as "spectacular". Kramnik, in a better position in the early middlegame, tried a piece sacrifice to achieve a strong tactical attack, a strategy known to be especially risky against computers, which are at their strongest defending against such attacks. True to form, Fritz found a watertight defence, and Kramnik's attack petered out, leaving him in a bad position. Kramnik resigned the game, believing the position lost. However, post-game human and computer analysis showed that the Fritz program was unlikely to have been able to force a win, and Kramnik had effectively sacrificed a drawn position. The final two games were draws. Given the circumstances, most commentators still rated Kramnik the stronger player in the match.
In January 2003, Garry Kasparov played Junior, another chess computer program, in New York City. The match ended 3-3.
In November 2003, Garry Kasparov played X3D Fritz. The match ended 2-2.
In 2005, Hydra, a dedicated chess computer with custom hardware and sixty-four processors, and also the winner of the 14th IPCCC in 2005, defeated seventh-ranked Michael Adams 5½-½ in a six-game match (though Adams' preparation was far less thorough than Kramnik's for the 2002 series).
In November-December 2006, World Champion Vladimir Kramnik played Deep Fritz. This time the computer won; the match ended 2-4. Kramnik was able to view the computer's opening book. In the first five games Kramnik steered the game into a typical "anti-computer" positional contest. He lost one game (overlooking a mate in one), and drew the next four. In the final game, attempting to draw the match, Kramnik played the more aggressive Sicilian Defence and was crushed.
There was speculation that interest in human-computer chess competition would decline as a result of the 2006 Kramnik-Deep Fritz match. According to Newborn, for example, "the science is over".
Human-computer chess matches showed the best computer systems overtaking the best human chess players in the late 1990s. For the 40 years prior to that, the trend had been that the best machines gained about 40 points per year in Elo rating while the best humans gained only about 2 points per year. The highest rating obtained by a computer in human competition was Deep Thought's USCF rating of 2551 in 1988, and FIDE no longer accepts human-computer results in its rating lists. Specialized machine-only Elo pools have been created for rating machines, but such numbers, while similar in appearance, should not be compared directly. In 2016, the Swedish Chess Computer Association rated the computer program Komodo at 3361.
Chess engines continue to improve. In 2009, chess engines running on slower hardware reached the grandmaster level. A mobile phone won a category 6 tournament with a performance rating of 2898: the chess engine Hiarcs 13, running inside Pocket Fritz 4 on an HTC Touch HD phone, won the Copa Mercosur tournament in Buenos Aires, Argentina, with 9 wins and 1 draw on August 4-14, 2009. Pocket Fritz 4 searches fewer than 20,000 positions per second. This contrasts with supercomputers such as Deep Blue, which searched 200 million positions per second.
Advanced Chess is a form of chess developed in 1998 by Kasparov in which a human plays against another human, and both have access to computers to enhance their strength. The resulting "advanced" player, Kasparov argued, would be stronger than a human or a computer alone, though this was never proven. In 2017, a computer engine won the Ultimate Challenge freestyle tournament, countering the claim that the "advanced" player advocated by Kasparov is stronger than a computer alone.
Current players tend to treat chess machines as analytical tools rather than opponents.
Implementation issues
Developers of a chess-playing computer system must decide on a number of fundamental implementation issues. These include:
- Board representation - how a single position is represented in data structures;
- Search techniques - how to identify the possible moves and select the most promising ones for further examination;
- Leaf evaluation - how to evaluate the value of a board position if no further search is to be done from that position.
Computer chess programs usually support a number of common de facto standards. Nearly all of today's programs can read and write game moves as Portable Game Notation (PGN), and can read and write individual positions as Forsyth-Edwards Notation (FEN). Older chess programs often understood only long algebraic notation, but today users expect chess programs to understand standard algebraic chess notation.
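To make the FEN format concrete, here is a minimal sketch (no chess library assumed) that splits the six space-separated fields of the starting position; the field meanings are defined by the FEN standard itself:

```python
# FEN for the standard starting position
fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

# The six space-separated fields defined by the FEN standard
placement, side, castling, en_passant, halfmove, fullmove = fen.split()

print("piece placement:", placement)   # ranks 8..1, '/'-separated
print("side to move:   ", side)        # 'w' or 'b'
print("castling rights:", castling)    # subset of 'KQkq', or '-'
print("en passant sq.: ", en_passant)  # target square, or '-'
print("halfmove clock: ", halfmove)    # plies since capture/pawn move
print("fullmove number:", fullmove)
```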
Most computer chess programs are divided into an engine (which computes the best move given the current position) and a user interface. Most engines are separate programs from the user interface, and the two parts communicate using a public communication protocol. The most popular protocol is the Chess Engine Communication Protocol (CECP). Another open alternative is the Universal Chess Interface (UCI). By dividing chess programs into these two pieces, developers can write only the user interface, or only the engine, without needing to write both parts of the program. (See also List of chess engines.)
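As an illustration of the engine side of such a protocol, here is a minimal UCI handshake sketch. The command and reply keywords ("uci", "isready", "go", "bestmove" and so on) are part of the UCI protocol; the engine name and the hard-coded move are placeholder assumptions standing in for a real search:

```python
import sys

def uci_loop():
    """Skeleton of a UCI engine's main loop (handshake only).

    A real engine would parse 'position' to set up the board and run a
    search on 'go'; here a hard-coded reply stands in for the search.
    """
    for line in sys.stdin:
        cmd = line.strip()
        if cmd == "uci":
            print("id name SketchEngine")      # hypothetical engine name
            print("id author Example")
            print("uciok")
        elif cmd == "isready":
            print("readyok")
        elif cmd.startswith("position"):
            pass                               # parse FEN / move list here
        elif cmd.startswith("go"):
            print("bestmove e2e4")             # placeholder for a real search
        elif cmd == "quit":
            break
        sys.stdout.flush()

if __name__ == "__main__":
    uci_loop()
```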
Implementers must also decide whether to use endgame databases and other optimizations, and they often implement common de facto chess standards.
Board representation
The data structure used to represent each chess position is key to the performance of move generation and position evaluation. Methods include pieces stored in arrays ("mailbox" and "0x88"), piece positions stored in lists ("piece lists"), collections of bit-sets for piece locations ("bitboards"), and Huffman-coded positions for compact long-term storage.
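A minimal bitboard sketch, assuming the common square numbering a1 = 0 through h8 = 63 (the helper names are illustrative, not taken from any particular engine):

```python
def square_index(file: str, rank: int) -> int:
    # a1 = 0 ... h8 = 63 (a common, but not universal, convention)
    return (rank - 1) * 8 + (ord(file) - ord("a"))

# One 64-bit integer per piece type and colour; bit i set = piece on square i.
white_pawns = 0
for f in "abcdefgh":
    white_pawns |= 1 << square_index(f, 2)    # all white pawns on rank 2

# Set-wise operations replace per-square loops: shifting the whole
# bitboard left by 8 bits advances every pawn one rank at once.
one_step = (white_pawns << 8) & 0xFFFF_FFFF_FFFF_FFFF

print(f"pawns:  {white_pawns:016x}")          # 000000000000ff00
print(f"pushed: {one_step:016x}")             # 0000000000ff0000
print("pawn count:", bin(white_pawns).count("1"))   # popcount -> 8
```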
Search techniques
The first paper on the subject was written by Claude Shannon in 1950, before anyone had programmed a computer to play chess. He described two possible search strategies, which he labelled "Type A" and "Type B".
A Type A program would use a "brute force" approach, examining every possible position to a fixed depth using the minimax algorithm. Shannon believed this would be impractical for two reasons.
First, with about thirty moves possible in a typical real-life position, he expected that searching the roughly 10^9 positions involved in looking three moves ahead for both sides (six plies) would take about sixteen minutes, even in the "very optimistic" case that the chess computer evaluated a million positions every second. (It took about forty years to achieve this speed.)
Second, it ignored the problem of quiescence, trying to evaluate only positions at the end of an exchange of pieces or another important sequence of moves (a "line"). He expected that adapting Type A to cope with this would greatly increase the number of positions needing to be examined and slow the program down still further.
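A minimal sketch of the Type A idea as plain fixed-depth minimax over an abstract game tree; the three callback parameters are assumptions standing in for real move generation and evaluation:

```python
def minimax(position, depth, maximizing, moves, apply_move, evaluate):
    """Fixed-depth minimax: Shannon's Type A "brute force" scheme.

    moves(position)        -> iterable of legal moves (assumed helper)
    apply_move(pos, move)  -> successor position      (assumed helper)
    evaluate(position)     -> static score, positive favours the maximizer
    """
    legal = list(moves(position))
    if depth == 0 or not legal:
        return evaluate(position)
    if maximizing:
        return max(minimax(apply_move(position, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in legal)
    return min(minimax(apply_move(position, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in legal)
```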
Instead of wasting processing power examining bad or trivial moves, Shannon suggested that "Type B" programs would use two improvements:
- Use quiescence search.
- Only look at a few good moves for each position.
This would allow them to look further ahead ("deeper") at the most significant lines in a reasonable time. The test of time has borne out the first approach: all modern programs employ a terminal quiescence search before evaluating positions. The second approach (now called forward pruning) was dropped in favour of search extensions.
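A minimal quiescence-search sketch in negamax form; the capture-generation and evaluation helpers are assumptions, and the "stand pat" cutoff is the standard idea of accepting the static score when no capture improves on it:

```python
def quiescence(position, alpha, beta, evaluate, captures, apply_move):
    """Search only "noisy" moves (captures) until the position is quiet.

    evaluate(pos)      -> static score from the side to move's view
    captures(pos)      -> iterable of capture moves (assumed helper)
    apply_move(pos, m) -> successor position         (assumed helper)
    """
    stand_pat = evaluate(position)       # score if we make no capture
    if stand_pat >= beta:
        return beta                      # fail-hard beta cutoff
    alpha = max(alpha, stand_pat)
    for move in captures(position):
        score = -quiescence(apply_move(position, move),
                            -beta, -alpha, evaluate, captures, apply_move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```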
Adriaan de Groot interviewed a number of chess players of varying strength, and concluded that both masters and beginners look at around forty to fifty positions before deciding which move to play. What makes the masters much better players is that they use pattern-recognition skills built from experience. This enables them to examine some lines in much greater depth than others, by simply not considering moves they can assume to be poor.
Further evidence for this is the way that good human players find it much easier to recall positions from genuine chess games, breaking them down into a small number of recognizable sub-positions, than completely random arrangements of the same pieces. Poor players, in contrast, have the same level of recall for both.
The problem with Type B is that it relies on the program being able to decide which moves are good enough to be worth considering ("plausible") in any given position, and this proved to be a much harder problem to solve than speeding up Type A searches with superior hardware and search-extension techniques.
One of the few chess grandmasters to devote himself seriously to computer chess was former World Chess Champion Mikhail Botvinnik, who wrote several works on the subject. He also held a doctorate in electrical engineering. Working with the relatively primitive hardware available in the Soviet Union in the early 1960s, Botvinnik had no choice but to investigate software move-selection techniques; at the time only the most powerful computers could achieve much beyond a three-ply full-width search, and Botvinnik had no such machines. In 1965 Botvinnik was a consultant to the ITEP team in a US-Soviet computer chess match (see Kotok-McCarthy).
One developmental milestone occurred when the team from Northwestern University, responsible for the Chess series of programs, which won the first three ACM Computer Chess Championships (1970-72), abandoned Type B searching in 1973. The resulting program, Chess 4.0, won that year's championship, and its successors went on to take second place in both the 1974 ACM Championship and that year's World Computer Chess Championship, before winning the ACM Championship again in 1975, 1976 and 1977.
One reason they gave for the switch was that they found it less stressful during competition, because it was difficult to anticipate which moves their Type B programs would play, and why. They also reported that Type A was much easier to debug in the four months they had available, and turned out to be just as fast: in the time it used to take to decide which moves were worth searching, it was possible simply to search all of them.
In fact, Chess 4.0 set the paradigm that was, and essentially still is, followed by all modern chess programs today. Chess 4.0-type programs won for the simple reason that they played better chess. Such programs do not try to mimic human thought processes, but rely on full-width alpha-beta and negascout searches. Most such programs (including all modern programs today) also include a fairly limited selective part of the search based on quiescence searches, and usually extensions and pruning (particularly null-move pruning from the 1990s onwards) triggered under certain conditions, in an attempt to weed out or reduce obviously bad moves (history moves) or to investigate interesting nodes (e.g. check extensions, passed pawns on the seventh rank, etc.). Extension and pruning triggers have to be used with great care, however. Over-extend, and the program wastes too much time looking at uninteresting positions. Over-prune, and there is a risk of cutting out interesting nodes. Chess programs differ in how and what types of pruning and extension rules are included, as well as in the evaluation function. Some programs are believed to be more selective than others (for example, Deep Blue was known to be less selective than most commercial programs, because it could afford to do more complete full-width searches), but all have a full-width search as a foundation and all have some selective components (Q-search, pruning/extensions).
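A minimal negamax-with-alpha-beta sketch of the full-width core such programs share; the helper callbacks are assumptions as above, and a real engine adds move ordering, quiescence at the leaves, extensions, and pruning on top of this skeleton:

```python
def alphabeta(position, depth, alpha, beta, moves, apply_move, evaluate):
    """Negamax formulation of alpha-beta over an abstract game tree.

    evaluate() must return the score from the side to move's point of
    view; the sign flip in the recursive call maintains that invariant.
    """
    legal = list(moves(position))
    if depth == 0 or not legal:
        return evaluate(position)
    best = float("-inf")
    for move in legal:                       # move ordering matters here
        score = -alphabeta(apply_move(position, move), depth - 1,
                           -beta, -alpha, moves, apply_move, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                    # remaining moves are refuted
            break                            # beta cutoff: prune siblings
    return best
```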
Although adding these components means the program does not literally examine every node within its search depth (so it is not truly brute force in that sense), the rare errors introduced by the selective search were found to be worth the extra time saved, because the program could search deeper. In this way chess programs get the best of both worlds.
Furthermore, technological advances by orders of magnitude in processing power have made the brute-force approach far more incisive than it was in the early years. The result is that a very solid tactical AI player, aided by some limited positional knowledge built in by the evaluation function and the pruning/extension rules, began to match the best players in the world. It turned out to produce excellent results, at least in the field of chess, to let computers do what they do best (calculate) rather than coax them into imitating human thought processes and knowledge. In 1997 Deep Blue defeated World Champion Garry Kasparov, marking the first time a computer had defeated the reigning world chess champion at standard time controls.
Computer chess programs consider chess moves as a game tree. In theory, they examine all moves, then all counter-moves to those moves, then all moves countering them, and so on, where each individual move by one player is called a "ply". This evaluation continues until a certain maximum search depth is reached or the program determines that a final "leaf" position has been reached (e.g., checkmate).
A naive implementation of this approach can only search to a small depth in a practical amount of time, so various methods have been devised to greatly speed up the search for good moves.
The AlphaZero program uses a variant of Monte Carlo tree search without rollouts.
For more information, see the following; a sketch of one of these techniques, iterative deepening, appears after the list:
- Minimax algorithm
- Alpha-beta pruning
- Killer heuristic
- Iterative deepening depth-first search
- Null-move heuristic
- Late move reductions
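A minimal iterative-deepening driver over an alpha-beta search like the sketch above; the time budget and the search callback are assumptions:

```python
import time

def iterative_deepening(position, budget_s, search, max_depth=64):
    """Search depth 1, 2, 3, ... until the time budget runs out.

    search(position, depth) -> (score, best_move); assumed to wrap a
    fixed-depth alpha-beta search. Shallower iterations are not wasted:
    real engines reuse their results to order moves at the next depth.
    """
    deadline = time.monotonic() + budget_s
    best = None
    for depth in range(1, max_depth + 1):
        best = search(position, depth)
        if time.monotonic() >= deadline:
            break        # keep the deepest fully completed result
    return best
```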
Leaf evaluation
For most chess positions, computers cannot look ahead to all possible final positions. Instead, they must look ahead a few plies and compare the possible positions, known as leaves. The algorithm that evaluates leaves is termed the "evaluation function", and these algorithms often differ greatly between chess programs.
Evaluation functions typically evaluate positions in hundredths of a pawn (called a centipawn), and consider material value along with other factors affecting the strength of each side. When counting up the material for each side, typical values for the pieces are 1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen. (See Chess piece relative value.) The king is sometimes given an arbitrary high value such as 200 points (Shannon's paper) or 1,000,000,000 points (1961 USSR program) to ensure that checkmate outweighs all other factors (Levy & Newborn 1991:45). By convention, a positive evaluation favours White, and a negative evaluation favours Black.
In addition to material, most evaluation functions take many other factors into account, such as pawn structure, the fact that a pair of bishops is usually worth more, and that centralized pieces are worth more, and so on. The protection of the king is usually considered, as well as the phase of the game (opening, middlegame or endgame).
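A minimal material-only sketch of such an evaluation, using the centipawn values above; the board encoding (an iterable of piece letters, uppercase for White as in the FEN placement field) is an assumption for illustration:

```python
# Centipawn piece values; the king is excluded (checkmate is handled by
# the search, not the evaluation), matching the convention in the text.
PIECE_VALUES = {"P": 100, "N": 300, "B": 300, "R": 500, "Q": 900}

def evaluate_material(board):
    """Material-only evaluation in centipawns, positive = good for White.

    A real evaluation function adds pawn structure, king safety,
    mobility, game phase, and so on.
    """
    score = 0
    for piece in board:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# Example: Black is missing a knight and two pawns -> prints 500
print(evaluate_material("RNBQKBNRPPPPPPPP" + "pppppprnbqkbr"))
```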
Endgame tablebases
Endgame play had long been one of the great weaknesses of chess programs, because of the depth of search required. Some otherwise master-level programs were unable to win in positions where even an average human player could force a win.
To solve this problem, computers have been used to analyze some chess endgame positions completely, starting with king and pawn against king. Such endgame tablebases are generated in advance using a form of retrograde analysis, starting with positions where the final result is known (e.g., where one side has been mated) and seeing which other positions are one move away from them, then which are one move from those, and so on. Ken Thompson was a pioneer in this area.
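A minimal sketch of the retrograde idea on a toy game rather than chess (a subtraction game, assumed purely for illustration: players alternately remove 1 or 2 counters, and the player left with no move loses). The backward sweep from known terminal results mirrors how tablebases are built:

```python
def retrograde_solve(n_max):
    """Label every position WIN/LOSS for the side to move, working
    outward from the known terminal position (0 counters = loss)."""
    result = {0: "LOSS"}            # no move available: side to move loses
    for n in range(1, n_max + 1):
        successors = [n - m for m in (1, 2) if n - m >= 0]
        # A position is a WIN if some move reaches a LOSS for the opponent.
        if any(result[s] == "LOSS" for s in successors):
            result[n] = "WIN"
        else:
            result[n] = "LOSS"
    return result

table = retrograde_solve(10)
print(table)   # every multiple of 3 is a LOSS for the side to move
```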
The results of the computer analysis sometimes surprised people. In 1977, Thompson's Belle chess machine used an endgame tablebase for king and rook against king and queen and was able to draw that theoretically lost ending against several masters (see Philidor position#Queen versus rook). This was despite not following the usual strategy of delaying defeat by keeping the defending king and rook close together for as long as possible. Asked to explain the reasons behind some of the program's moves, Thompson was unable to do so beyond saying that the program's database simply returned the best moves.
Most grandmasters declined to play against the computer in the queen versus rook endgame, but Walter Browne accepted the challenge. A queen versus rook position was set up in which the queen can win in thirty moves, with perfect play. Browne was allowed 2½ hours to play fifty moves, otherwise a draw would be claimed under the fifty-move rule. After forty-five moves, Browne agreed to a draw, being unable to force checkmate or win the rook within the next five moves. In the final position, Browne was still seventeen moves away from checkmate, but not quite that far away from winning the rook. Browne studied the endgame, and played the computer again a week later in a different position in which the queen can win in thirty moves. This time, he captured the rook on the fiftieth move, giving him a winning position (Levy & Newborn 1991:144-48), (Nunn 2002:49).
Other positions, long believed to be won, turned out to take more moves against perfect play to actually win than were allowed by chess's fifty-move rule. As a consequence, for some years the official FIDE rules of chess were changed to extend the number of moves allowed in these endings. After a while the rule reverted to fifty moves in all positions, as more such positions were discovered, complicating the rule still further, and it made no difference in human play, since humans could not play the positions perfectly.
Over the years, other endgame database formats have been released, including the Edwards Tablebases, the De Koning Database and the Nalimov Tablebases, which are used by many chess programs such as Rybka, Shredder and Fritz. Tablebases for all positions with six pieces are available. Some seven-piece endgames have been analyzed by Marc Bourzutschky and Yakov Konoval. Programmers using the Lomonosov supercomputer in Moscow have completed a chess tablebase for all endgames with seven pieces or fewer (trivial endgame positions are excluded, such as six white pieces versus a lone black king). In all of these endgame databases it is assumed that castling is no longer possible.
Many tablebases do not consider the fifty-move rule, under which a game in which fifty moves pass without a capture or pawn move can be claimed to be a draw by either player. This results in a tablebase returning results such as "forced mate in sixty-six moves" in positions which would actually be drawn because of the fifty-move rule. One reason for this is that if the rules of chess were changed once more, giving more time to win such positions, it would not be necessary to regenerate all the tablebases. It is also very easy for the program using the tablebases to notice and take account of this "feature", and in any case a program using an endgame tablebase will choose the move that leads to the quickest win (even if it would fall foul of the fifty-move rule with perfect play). Against an opponent not using a tablebase, such a choice gives a good chance of winning within fifty moves.
The Nalimov tablebases, which use state-of-the-art compression techniques, require 7.05 GB of hard disk space for all five-piece endings. Covering all six-piece endings requires approximately 1.2 TB. It is estimated that seven-piece tablebases would require between 50 and 200 TB of storage space.
Endgame databases featured prominently in 1999, when Kasparov played an exhibition game on the Internet against the rest of the world. A seven-piece queen and pawn endgame was reached, with the World Team struggling to salvage a draw. Eugene Nalimov helped by generating the six-piece ending tablebase in which both sides had two queens, which was used heavily to aid the analysis by both sides.
Other optimizations
Many other optimizations can be used to make chess-playing programs stronger. For example, transposition tables are used to record positions that have already been evaluated, to save recalculating them. Refutation tables record key moves that "refute" what appears to be a good move; these are typically tried first in variant positions (since a move that refutes one position is likely to refute another). Opening books aid computer programs by giving common openings that are considered good play (and good ways to counter poor openings). Many chess engines use pondering (thinking on the opponent's time) to increase their strength.
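A minimal transposition-table sketch using Zobrist hashing; the piece/square encoding and table policy are illustrative assumptions, and real engines also store search depth, bounds, and the best move in each entry:

```python
import random

random.seed(42)                        # reproducible keys for the sketch
PIECES = "PNBRQKpnbrqk"
# One random 64-bit key per (piece, square); XOR-ing them gives a
# position hash that can be updated incrementally as moves are made.
ZOBRIST = {(p, sq): random.getrandbits(64)
           for p in PIECES for sq in range(64)}
SIDE_TO_MOVE = random.getrandbits(64)  # extra key when Black is to move

def zobrist_hash(placement, white_to_move):
    """placement: iterable of (piece_letter, square_index) pairs."""
    h = 0
    for piece, square in placement:
        h ^= ZOBRIST[(piece, square)]
    if not white_to_move:
        h ^= SIDE_TO_MOVE
    return h

transposition_table = {}               # hash -> cached evaluation

def lookup_or_evaluate(placement, white_to_move, evaluate):
    key = zobrist_hash(placement, white_to_move)
    if key not in transposition_table:           # cache miss: compute once
        transposition_table[key] = evaluate(placement)
    return transposition_table[key]
```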
Of course, faster hardware and additional processors can also improve the abilities of chess-playing programs, and some systems (such as Deep Blue) use specialized chess hardware rather than software alone. Another way to examine more chess positions is to distribute the analysis of positions to many computers. The ChessBrain project was a chess program that distributed the search-tree computation through the Internet. In 2004 ChessBrain played chess using 2,070 computers.
Playing strength versus computer speed
It has been estimated that doubling the computer speed gains approximately fifty to seventy Elo points in playing strength (Levy & Newborn 1991:192).
Chess variants
Chess engines have been developed to play some chess variants such as Capablanca chess, but the engines are almost never directly integrated with specific hardware. Even among the software that has been developed, most programs will not play chess beyond a certain board size, so games played on an unbounded board (infinite chess) remain virtually untouched by computer chess.
Other chess software
There are several other forms of computer software related to chess, including the following:
- Chess game viewers allow players to replay pre-recorded games on a computer. Most chess-playing programs can also be used for this purpose, but some special-purpose software exists.
- Chess instruction software is designed to teach chess.
- Chess databases are systems that allow the searching of a large library of historical games. Shane's Chess Information Database (Scid) is an example of a database that can be used under Microsoft Windows, UNIX, Linux and Mac OS X. There are also commercial databases, such as ChessBase and Chess Assistant, for Windows and for Mac OS X.
- Software for handling chess problems
- Internet chess servers and clients
Leading theorists
Famous computer chess theorists include:
- Alexander Brudno
- Alexander Kronrod
- Georgy Adelson-Velsky
- Danny Kopec, International Master and professor of computer science
- Mikhail Botvinnik, three-time World Chess Champion
- D. F. Beal (Donald Francis Beal)
- David Levy
- Feng-hsiung Hsu, Father of Deep Blue (IBM & Carnegie Mellon University)
- Robert Hyatt, author of the open source Crafty chess program
- Hans Berliner
- Claude Elwood Shannon
Solving chess
The prospect of completely solving chess is generally considered rather remote. It is widely conjectured that there is no computationally inexpensive method to solve chess even in the very weak sense of determining with certainty the value of the initial position, and hence the idea of solving chess in the stronger sense of obtaining a practically usable description of a strategy for perfect play for either side seems unrealistic today. However, it has not been proven that no computationally cheap way of determining the best move in a chess position exists, nor even that a traditional alpha-beta searcher running on present-day computing hardware could not solve the initial position in an acceptable amount of time. The difficulty in proving the latter lies in the fact that, while the number of board positions that could occur in the course of a chess game is huge (on the order of at least 10^43 to 10^47), it is hard to rule out with mathematical certainty the possibility that the initial position allows either side to force a mate or a threefold repetition after relatively few moves, in which case the search tree might encompass only a very small subset of the set of possible positions. It has been mathematically proven that generalized chess (chess played with an arbitrarily large number of pieces on an arbitrarily large chessboard) is EXPTIME-complete, meaning that determining the winning side in an arbitrary position of generalized chess provably takes exponential time in the worst case; however, this theoretical result gives no lower bound on the amount of work required to solve ordinary 8x8 chess.
Gardner minichess, played on a 5x5 board with approximately 10^18 possible board positions, has been solved; its game-theoretic value is 1/2 (i.e., a draw can be forced by either side), and the forcing strategy to achieve that result has been described.
Progress has also been made from the other direction: as of 2012, all endgames with 7 or fewer pieces (2 kings and up to 5 other pieces) have been solved.
Chronology
Computer Chess History: An AI Perspective - a lecture featuring Murray Campbell (IBM Deep Blue project), Edward Feigenbaum, David Levy, John McCarthy, and Monty Newborn, at the Computer History Museum.