I have the following file.txt, as described in this post. It may contain up to 400,000 entries that I need to insert into the database. The process succeeds for files of around 80,000 up to at most 100,000 entries, but fails for anything larger. The problem is probably in the last loop that writes to the database.

```
FILETYPE: STUDENT
FORMATVERSION: 1.2
ACTION: REPLACEALL
COLUMNS: NAME,SURNAME,AGE,SEX
"raj","jonson",17,"M"
"Rita","Sweety",17,"F"
```

Here is my sample code:

```java
// entry point
public RestResponse provisionStudent(MultipartFormDataInput input) {
    long processed = 0L;
    HashMap<String, String> columnMaping = new HashMap<String, String>();
    // mapping here, as the column names are not the same as in the database
    columnMaping.put("NAME", "name");
    columnMaping.put("SURNAME", "s_name"); // key must match the SURNAME header in the file
    columnMaping.put("AGE", "age");
    columnMaping.put("SEX", "mf");
    try {
        String lineRead;
        List<Map<String, String>> listheader = new ArrayList<>();
        Map<String, String> obj = new LinkedHashMap<>();
        ArrayList<String> body = new ArrayList<String>();
        InputStream result;
        // checking that the request actually contains a file
        if (input == null || input.getParts() == null || input.getParts().isEmpty()) {
            throw new IllegalArgumentException("Multipart request is empty");
        }
        if (input.getParts().size() == 1) {
            InputPart filePart = input.getParts().iterator().next();
            result = filePart.getBody(InputStream.class, null);
        } else {
            result = input.getFormDataPart("file", InputStream.class, null);
        }
        BufferedReader httpResponseReader = new BufferedReader(new InputStreamReader(result));
        // read the header lines (those containing ':') into obj and the data lines into the body list
        while ((lineRead = httpResponseReader.readLine()) != null) {
            if (lineRead.contains(":")) {
                String[] headervalues = lineRead.replace(" ", "").split(":");
                obj.put(headervalues[0], headervalues[1]);
            } else {
                if (lineRead.length() > 0)
                    body.add(lineRead.replace(" ", ""));
            }
        }
        // ... the rest of the method (the loop that writes the rows to the database) continues here
```
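The loop that actually writes to the database is not shown above. For reference, here is a minimal sketch of a batched JDBC insert loop for this kind of data; the table name `students` and the row layout are assumptions for illustration only (the column names follow the mapping above), not the original code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class BatchInsertSketch {
    // Sketch only: insert parsed rows in batches so a very large file
    // (e.g. 400,000 entries) is not accumulated into one huge statement or transaction.
    static void insertRows(Connection conn, List<String[]> rows) throws Exception {
        // "students" is a hypothetical table name; columns follow the mapping above
        String sql = "INSERT INTO students (name, s_name, age, mf) VALUES (?, ?, ?, ?)";
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int count = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.setInt(3, Integer.parseInt(row[2]));
                ps.setString(4, row[3]);
                ps.addBatch();
                if (++count % 5000 == 0) { // flush every 5,000 rows to keep memory bounded
                    ps.executeBatch();
                    conn.commit();
                }
            }
            ps.executeBatch(); // flush the remaining rows
            conn.commit();
        }
    }
}
```

With this pattern, the driver sends the rows in fixed-size batches and commits periodically, which is the usual way to keep memory and transaction size under control when a single loop has to insert hundreds of thousands of rows.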
Java fails when inserting large data (around 80,000 rows) into a Postgres database
慕工程0101907
2021-08-19 22:27:49