1 Answer

(Moving my comment into an answer.)
You're trying to process the file object rather than the text inside the file. After you create the text file, reopen it and read the whole file back in before tokenizing.
Try this code:
import os

outfile = open('result.txt', 'w')
path = "C:/Users/okeke/Documents/Work flow/IT Text analytics Project/Extract/Dubuque_text-nlp"
files = os.listdir(path)
for file in files:
    with open(path + "/" + file) as f:
        outfile.write(f.read() + '\n')
        #outfile.write(str(os.stat(path + "/" + file).st_size) + '\n')
outfile.close()  # done writing

from nltk.tokenize import sent_tokenize, word_tokenize

with open('result.txt') as outfile:  # reopen for reading
    alltext = outfile.read()         # read the entire file
print(alltext)

sent_tokens = sent_tokenize(alltext)  # process the file text: tokenize sentences
word_tokens = word_tokenize(alltext)  # process the file text: tokenize words
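One side note, as an assumption about your environment rather than part of the original question: sent_tokenize and word_tokenize rely on NLTK's Punkt model data, so if it has never been downloaded you'll hit a LookupError. A minimal sketch of the one-time download plus a quick sanity check on the tokens produced above:

import nltk

# one-time download of the Punkt tokenizer models; harmless if already installed
nltk.download('punkt')

# spot-check the tokenized output from the code above
print(sent_tokens[:3])    # first few sentences
print(word_tokens[:10])   # first few word tokens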