I want to create an AI for the Chrome no-internet Dino game, so I adapted this GitHub repository to fit my needs. I use the following formula to compute the new Q value (source: https://en.wikipedia.org/wiki/Q-learning):

    Q(s_t, a_t) ← (1 − α) · Q(s_t, a_t) + α · (r_t + γ · max_a Q(s_{t+1}, a))

My problem is that even after roughly 2,000,000 iterations, my game score does not increase. You can find the game file here: https://pastebin.com/XrwQ0suJ

QLearning.py:

import pickle
import Game_headless
import Game
import numpy as np
from collections import defaultdict

rewardAlive = 1
rewardKill = -10000

alpha = 0.2  # Learning rate
gamma = 0.9  # Discount factor

Q = defaultdict(lambda: [0, 0, 0])  # 0 = Jump / 1 = Duck / 2 = Do Nothing

oldState = None
oldAction = None

gameCounter = 0
gameScores = []


def paramsToState(params):
    # Discretize the continuous positions to 10-pixel buckets so that
    # similar situations map to the same table entry.
    cactus1X = round(params["cactus1X"] / 10) * 10
    cactus2X = round(params["cactus2X"] / 10) * 10
    cactus1Height = params["cactus1Height"]
    cactus2Height = params["cactus2Height"]
    pteraX = round(params["pteraX"] / 10) * 10
    pteraY = params["pteraY"]
    playerY = round(params["playerY"] / 10) * 10
    gamespeed = params["gamespeed"]

    return str(cactus1X) + "_" + str(cactus2X) + "_" + str(cactus1Height) + "_" + \
        str(cactus2Height) + "_" + str(pteraX) + "_" + str(pteraY) + "_" + \
        str(playerY) + "_" + str(gamespeed)


def shouldEmulateKeyPress(params):  # 0 = Jump / 1 = Duck / 2 = Do Nothing
    global oldState
    global oldAction

    state = paramsToState(params)
    oldState = state

    estReward = Q[state]
    action = estReward.index(max(estReward))

    if oldAction is None:
        oldAction = action
        return action

    # Previous action was successful
    # -> Update Q
    prevReward = Q[oldState]
    prevReward[oldAction] = (1 - alpha) * prevReward[oldAction] + \
        alpha * (rewardAlive + gamma * max(estReward))
    Q[oldState] = prevReward

    oldAction = action
    return action

On every frame, gameplay() from Game_headless.py calls shouldEmulateKeyPress(), which then returns 0 for jump, 1 for duck, and 2 for doing nothing. I have tried tweaking the constants, but that showed no effect. If you have any questions, feel free to ask! Thanks in advance!
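[Editor's note] For reference, here is a minimal, self-contained sketch of the tabular update that the Wikipedia formula describes. It is not the poster's code: the driver loop and its states are made up purely so the snippet runs, the action choice is greedy with no exploration, and alpha/gamma simply mirror the values in the question. The point it illustrates is the bookkeeping of the transition: the previous state and action are kept around until after the update.

import random
from collections import defaultdict

alpha = 0.2   # learning rate, mirroring the question
gamma = 0.9   # discount factor

Q = defaultdict(lambda: [0.0, 0.0, 0.0])  # 0 = Jump / 1 = Duck / 2 = Do Nothing

prevState = None
prevAction = None

def chooseAndLearn(state, reward):
    # Update the PREVIOUS state-action pair first, bootstrapping on the
    # best value of the state we just arrived in:
    #   Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (r + gamma * max_a' Q(s',a'))
    global prevState, prevAction
    if prevState is not None:
        old = Q[prevState][prevAction]
        Q[prevState][prevAction] = (1 - alpha) * old + \
            alpha * (reward + gamma * max(Q[state]))

    # Then pick an action for the new state (greedy; no exploration here).
    action = Q[state].index(max(Q[state]))
    prevState, prevAction = state, action
    return action

# Smoke test over hypothetical discretized states.
for frame in range(10):
    fakeState = str(random.choice([100, 110, 120])) + "_" + str(random.choice([40, 50]))
    print(chooseAndLearn(fakeState, reward=1))

One thing worth comparing: in the question's shouldEmulateKeyPress(), oldState is overwritten with the new state before the update runs, so prevReward = Q[oldState] reads the new state's entry rather than the previous one; the sketch above keeps the two separate until the update has been applied.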
2 Answers

慕虎7371278
Someone on Reddit built exactly this; have you looked at their code? https://www.reddit.com/r/MachineLearning/comments/8iujuu/p_tfrex_ai_learns_to_play_google_chromes_dinosaur/