2048 (3x3, 4x4, 5x5) AI

793 ratings

ASO Keyword Dashboard

Tracking 97 keywords for 2048 (3x3, 4x4, 5x5) AI in Apple App Store

Developer: Jinyang Tang • Category: Games • Rating: 4.62

2048 (3x3, 4x4, 5x5) AI tracks 97 keywords in the Apple App Store; none rank yet, so all 97 still need traction. Key averages: opportunity 70.1, difficulty 38.5.

  • Tracked keywords: 97 (0 ranked • 97 not ranking yet)
  • Top 10 coverage: best rank — • latest leader —
  • Avg opportunity: 70.1 (top keyword: strategy)
  • Avg difficulty: 38.5 (lower scores indicate easier wins)

Opportunity leaders

  • strategy: Opportunity 73.0 • Difficulty 41.7 • Rank — • Competitors 236
  • course: Opportunity 73.0 • Difficulty 41.1 • Rank — • Competitors 71
  • random: Opportunity 73.0 • Difficulty 40.9 • Rank — • Competitors 144
  • position: Opportunity 73.0 • Difficulty 40.0 • Rank — • Competitors 50
  • advantage: Opportunity 73.0 • Difficulty 39.6 • Rank — • Competitors 117

Unranked opportunities

  • strategy: Opportunity 73.0 • Difficulty 41.7 • Competitors 236
  • course: Opportunity 73.0 • Difficulty 41.1 • Competitors 71
  • random: Opportunity 73.0 • Difficulty 40.9 • Competitors 144
  • position: Opportunity 73.0 • Difficulty 40.0 • Competitors 50
  • advantage: Opportunity 73.0 • Difficulty 39.6 • Competitors 117

High competition keywords

  • like: 142,680 total apps • 2,089 major competitors • Latest rank — • Difficulty 52.5
  • create: 118,650 total apps • 1,432 major competitors • Latest rank — • Difficulty 51.6
  • best: 116,953 total apps • 1,658 major competitors • Latest rank — • Difficulty 51.5
  • features: 115,626 total apps • 1,316 major competitors • Latest rank — • Difficulty 51.4
  • based: 80,023 total apps • 681 major competitors • Latest rank — • Difficulty 49.6

All tracked keywords

Includes opportunity, difficulty, rankings and competitor benchmarks

Major Competitors
Keyword         Opportunity   Difficulty   Competing apps   Major competitors   Median installs   Avg rating
best            66            51           116,953          1,658               150               4.2
puzzle          70            45           29,343           552                 175               4.3
strategy        73            42           12,444           236                 250               4.2
action          72            43           17,905           308                 150               4.2
art             71            44           23,826           253                 125               4.2
version         70            45           29,505           226                 150               4.0
order           68            49           60,070           523                 75                4.3
environment     72            42           13,606           89                  75                4.1
classic         70            45           25,527           532                 200               4.3
used            68            48           50,800           303                 75                4.0
limited         72            43           14,765           217                 125               4.1
created         70            45           26,504           158                 125               4.2
multiple        68            49           68,139           682                 125               4.2
including       68            49           68,338           903                 150               4.1
score           71            44           21,956           297                 100               4.2
various         69            47           48,990           387                 100               4.1
tree            71            35           2,707            27                  150               4.2
search          68            49           56,712           630                 125               4.1
move            70            45           26,889           326                 125               4.1
understanding   72            42           14,163           49                  75                4.3
technique       71            35           2,836            13                  100               4.2
run             71            44           22,315           303                 150               4.1
create          66            52           118,650          1,432               125               4.2
good            70            46           32,071           348                 125               4.2
ai              69            47           44,935           390                 125               4.3

97 keywords • page 1 of 4

App Description

Classic 2048 puzzle game redefined by AI.

Our 2048 is one of a kind in the market. We leverage multiple algorithms to create an AI for the classic 2048 puzzle game.

* Redefined by AI *
We created an AI that takes advantage of multiple state-of-the-art algorithms, including Monte Carlo Tree Search (MCTS) [a], Expectimax [b], Iterative Deepening Depth-First Search (IDDFS) [c] and Reinforcement Learning [d].

(a) Monte Carlo Tree Search (MCTS) is a heuristic search algorithm introduced in 2006 for computer Go; it has since been used in other games such as chess, and of course in this 2048 game. MCTS chooses the best possible move from the current state of the game tree (similar to IDDFS).

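As a rough illustration of the rollout idea behind MCTS, here is a flat Monte Carlo move picker in Python: it has no tree statistics, just random playouts per candidate move. The `legal_moves`, `apply_move`, and `score` callables are illustrative stand-ins, not the app's actual code.

```python
import random

def monte_carlo_move(board, legal_moves, apply_move, score,
                     rollouts=50, rollout_depth=10, rng=None):
    """Pick the move whose random playouts score best on average."""
    rng = rng or random.Random(0)
    best_move, best_value = None, float("-inf")
    for move in legal_moves(board):
        total = 0.0
        for _ in range(rollouts):
            state = apply_move(board, move)
            # Continue with random legal moves up to the depth limit.
            for _ in range(rollout_depth):
                options = legal_moves(state)
                if not options:
                    break
                state = apply_move(state, rng.choice(options))
            total += score(state)
        average = total / rollouts
        if average > best_value:
            best_move, best_value = move, average
    return best_move
```

Full MCTS additionally keeps per-node visit counts and value estimates (e.g. via the UCT formula) to steer later rollouts toward promising branches, which this flat sketch omits.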
(b) Expectimax search is a variation of the minimax algorithm, with the addition of "chance" nodes in the search tree. This technique is commonly used in games with nondeterministic behavior, such as Minesweeper (random mine locations), Pac-Man (random ghost moves), and this 2048 game (random tile spawn position and value).

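A minimal expectimax sketch under assumptions of my own: the list-of-lists board, the empty-cell heuristic, and the standard 2048 spawn odds (a 2 with probability 0.9, a 4 with probability 0.1) are illustrative, not the app's actual implementation.

```python
def empty_cells(board):
    """Coordinates of all empty cells on a square board."""
    n = len(board)
    return [(r, c) for r in range(n) for c in range(n) if board[r][c] == 0]

def heuristic(board):
    # Simple evaluation: favor boards with more empty cells.
    return len(empty_cells(board))

def expectimax(board, depth, is_max, apply_move, legal_moves):
    if depth == 0:
        return heuristic(board)
    if is_max:
        # Max node: the player picks the best move.
        moves = legal_moves(board)
        if not moves:
            return heuristic(board)
        return max(expectimax(apply_move(board, m), depth - 1, False,
                              apply_move, legal_moves) for m in moves)
    # Chance node: average over every possible tile spawn,
    # weighted by its probability (2 with p=0.9, 4 with p=0.1).
    cells = empty_cells(board)
    if not cells:
        return heuristic(board)
    total = 0.0
    for r, c in cells:
        for tile, p in ((2, 0.9), (4, 0.1)):
            child = [row[:] for row in board]
            child[r][c] = tile
            total += p * expectimax(child, depth - 1, True,
                                    apply_move, legal_moves)
    return total / len(cells)
```

The averaging at chance nodes is the whole difference from minimax: instead of assuming a worst-case opponent, the spawn is scored by its expected value.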
(c) Iterative deepening depth-first search (IDDFS) is a search strategy in which a depth-limited version of DFS is run repeatedly with increasing depth limits. IDDFS is optimal like breadth-first search (BFS) but uses much less memory. This 2048 AI implementation assigns heuristic scores (or penalties) to multiple board features (e.g. empty-cell count) to compute the optimal next move.

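The iterative-deepening loop can be sketched as follows; the `score` and `successors` callables are hypothetical stand-ins (a feature-based heuristic like the empty-cell count above would plug in as `score`).

```python
def depth_limited_best(board, depth, score, successors):
    """Return (best_score, best_move) for a fixed-depth DFS."""
    if depth == 0:
        return score(board), None
    best = (score(board), None)
    for move, child in successors(board):
        value, _ = depth_limited_best(child, depth - 1, score, successors)
        if value > best[0]:
            best = (value, move)
    return best

def iddfs_move(board, max_depth, score, successors):
    """Rerun depth-limited search with increasing limits (IDDFS),
    keeping the move chosen at the deepest completed depth."""
    best_move = None
    for depth in range(1, max_depth + 1):
        _, move = depth_limited_best(board, depth, score, successors)
        if move is not None:
            best_move = move
    return best_move
```

Rerunning the shallow levels looks wasteful, but because the tree grows exponentially with depth, the deepest pass dominates the total cost, which is why IDDFS keeps BFS-like behavior at DFS-like memory.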
(d) Reinforcement learning trains ML models to choose actions in an environment so as to maximize cumulative reward. This 2048 RL implementation has no hard-coded intelligence (i.e. no heuristic scores based on human understanding of the game). The AI agent has no built-in knowledge of what makes a good move; it "figures it out" on its own as we train the model.

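To make the "reward alone, no hand-coded knowledge" idea concrete, here is a minimal tabular Q-learning update; the states and actions are abstract placeholders, and the app's actual agent (and its model architecture) is not described beyond the paragraph above.

```python
def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.99):
    """One Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    q is a dict mapping (state, action) pairs to value estimates."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```

Nothing in the update encodes what a good 2048 move looks like; value estimates emerge purely from observed rewards propagating backwards through visited states.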
References:
[a] https://www.aaai.org/Papers/AIIDE/2008/AIIDE08-036.pdf
[b] http://www.jveness.info/publications/thesis.pdf
[c] https://cse.sc.edu/~MGV/csce580sp15/gradPres/korf_IDAStar_1985.pdf
[d] http://rail.eecs.berkeley.edu/deeprlcourse/static/slides/lec-8.pdf