sklearn's `TfidfVectorizer` only works when applied immediately after training, when the analyzer returns lists of `nltk.tree.Tree` objects. This is a mystery, because the model is always loaded from file before being applied. Debugging shows nothing wrong or different about the model file when it is loaded and applied in a session of its own, compared to when it is applied right after training in the same session. The analyzer is called and works correctly in both cases. Here is a script that helps reproduce this mysterious behavior:

```python
import joblib
import numpy as np
from nltk import Tree
from sklearn.feature_extraction.text import TfidfVectorizer


def lexicalized_production_analyzer(sentence_trees):
    productions_per_sentence = [tree.productions() for tree in sentence_trees]
    return np.concatenate(productions_per_sentence)


def train(corpus):
    model = TfidfVectorizer(analyzer=lexicalized_production_analyzer)
    model.fit(corpus)
    joblib.dump(model, "model.joblib")


def apply(corpus):
    model = joblib.load("model.joblib")
    result = model.transform(corpus)
    return result


# example data
trees = [Tree('ROOT', [Tree('FRAG', [Tree('S', [Tree('VP', [Tree('VBG', ['arkling']), Tree('NP', [Tree('NP', [Tree('NNS', ['dots'])]), Tree('VP', [Tree('VBG', ['nestling']), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('DT', ['the']), Tree('NN', ['grass'])])])])])])]), Tree(',', [',']), Tree('VP', [Tree('VBG', ['winking']), Tree('CC', ['and']), Tree('VP', [Tree('VBG', ['glimmering']), Tree('PP', [Tree('IN', ['like']), Tree('NP', [Tree('NNS', ['jewels'])])])])]), Tree('.', ['.'])])]),
         Tree('ROOT', [Tree('FRAG', [Tree('NP', [Tree('NP', [Tree('NNP', ['Rose']), Tree('NNS', ['petals'])]), Tree('NP', [Tree('NP', [Tree('ADVP', [Tree('RB', ['perhaps'])]), Tree(',', [',']), Tree('CC', ['or']), Tree('NP', [Tree('DT', ['some'])]), Tree('NML', [Tree('NN', ['kind'])])]), Tree('PP', [Tree('IN', ['of']), Tree('NP', [Tree('NN', ['confetti'])])])])]), Tree('.', ['.'])])])]
corpus = [trees, trees, trees]
```

First train the model, saving the `model.joblib` file, then load and apply it:

```python
train(corpus)
result = apply(corpus)
print("number of elements in results: " + str(result.getnnz()))
print("shape of results: " + str(result.shape))
```

We print the result's `.getnnz()` to show that the model is processing 120 elements:

```
number of elements in results: 120
shape of results: (3, 40)
```

But the model is loaded from file both times, and there are no global variables (that I know of), so we cannot think of any reason why it works in one case and not in the other.
1 Answer

GCT1015
Python's `hash` function is nondeterministic across runs, meaning its values may differ from one run to the next. Because of that, the hash values got pickled by `joblib` instead of being recomputed as they should be, so this looks like a bug in `nltk`. It causes the model not to see the production rules after reloading: the hashes no longer match, so it is as if the productions were never stored in the vocabulary.
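As I understand the failure mode, nltk's grammar objects cache `hash()` in an attribute at construction time, and that cached value travels through the pickle. Here is a minimal sketch of that pattern with a stand-in class (`CachedHashKey`, `run_snippet`, and the seed values are mine, not nltk's actual code), showing how a pickled vocabulary dict stops finding equal keys in a run with a different hash seed:

```python
import os
import pickle
import subprocess
import sys
import tempfile

# Mimics the (assumed) nltk pattern: the object caches hash() at construction
# time, and the cached value is pickled along with the object's __dict__.
CLASS_SRC = """
class CachedHashKey:
    def __init__(self, symbol):
        self.symbol = symbol
        self._hash = hash(symbol)  # cached under the current interpreter's hash seed

    def __hash__(self):
        return self._hash  # stale after unpickling in a differently-seeded run

    def __eq__(self, other):
        return isinstance(other, CachedHashKey) and self.symbol == other.symbol
"""


def run_snippet(seed, body, path):
    # Run a snippet in a fresh interpreter pinned to a specific hash seed.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    script = CLASS_SRC + "\nimport pickle, sys\npath = sys.argv[1]\n" + body
    return subprocess.check_output(
        [sys.executable, "-c", script, path], env=env, text=True
    ).strip()


path = os.path.join(tempfile.mkdtemp(), "vocab.pickle")

# "Training" run: build a vocabulary dict keyed by the object and pickle it.
run_snippet("1", """
vocab = {CachedHashKey("NP"): 0}
with open(path, "wb") as f:
    pickle.dump(vocab, f)
""", path)

# "Apply" run under a different seed: the unpickled key keeps its stale cached
# hash, so an equal, freshly constructed key is no longer found in the dict.
found = run_snippet("2", """
with open(path, "rb") as f:
    vocab = pickle.load(f)
print(CachedHashKey("NP") in vocab)
""", path)
print(found)  # False: the lookup misses even though the keys compare equal
```

A dict recomputes `__hash__` for its keys when unpickled, but since the stale cached value is what `__hash__` returns, the rehash changes nothing, which matches the symptoms in the question.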
Quite tricky!
Until this particular issue is fixed in `nltk`, setting `PYTHONHASHSEED` before running both the training and the testing script will force the hashes to be the same every time:
```shell
PYTHONHASHSEED=0 python script.py
```
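To see why pinning the seed helps, one can compare string hashes computed in fresh interpreters (a sketch; `PYTHONHASHSEED` is CPython's real environment variable, the helper function is mine):

```python
import os
import subprocess
import sys


def hash_in_fresh_interpreter(seed):
    # Compute hash('NP') in a brand-new interpreter pinned to the given seed.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.check_output(
        [sys.executable, "-c", "print(hash('NP'))"], env=env
    )
    return int(out)


# Same seed: identical hashes across runs, so pickled cached hashes stay valid.
print(hash_in_fresh_interpreter("0") == hash_in_fresh_interpreter("0"))  # True

# Different (or randomized) seeds: the hashes disagree between runs.
print(hash_in_fresh_interpreter("1") == hash_in_fresh_interpreter("2"))
```

With the seed fixed to the same value for both the training and the applying process, the hashes pickled into `model.joblib` remain valid when the model is reloaded.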