Chunked Upload of Large Files: A Demo
To implement chunked upload, including resumable uploads and a retry mechanism, we can use Vue.js on the frontend and Spring Boot on the backend. The approach consists of the following parts:

Frontend:
- Use Vue.js to split the file into chunks and upload them.
- Track upload progress and state, handling resume and retry.

Backend:
- Use Spring Boot to handle the chunk-upload requests.
- Store the uploaded chunks and reassemble them into the complete file.
Frontend (Vue.js)
First, install the required dependency, axios (used to send the HTTP requests).
npm install axios
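The heart of chunked upload is slicing the file into fixed-size pieces. Before looking at the component, the boundary arithmetic it relies on can be sketched as a standalone pure function (an illustration only, not part of the component):

```javascript
// Standalone sketch of the chunk-boundary math used in the component below.
// chunkRanges(10, 4) → [{ start: 0, end: 4 }, { start: 4, end: 8 }, { start: 8, end: 10 }]
function chunkRanges(fileSize, chunkSize) {
  const totalChunks = Math.ceil(fileSize / chunkSize);
  const ranges = [];
  for (let i = 0; i < totalChunks; i++) {
    const start = i * chunkSize;
    const end = Math.min(fileSize, start + chunkSize); // last chunk may be shorter
    ranges.push({ start, end });
  }
  return ranges;
}
```

Each `{ start, end }` pair is exactly what `File.slice(start, end)` takes, so the last chunk automatically shrinks to the remaining bytes.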
Then write a Vue component that handles the chunked upload:
<template>
<div>
<input type="file" @change="handleFileChange" />
<button @click="uploadFile" :disabled="!file">Upload</button>
<div v-if="uploadProgress >= 0">Upload Progress: {{ uploadProgress }}%</div>
</div>
</template>
<script>
import axios from 'axios';
export default {
data() {
return {
file: null,
chunkSize: 2 * 1024 * 1024, // 2MB
uploadProgress: -1
};
},
methods: {
handleFileChange(event) {
this.file = event.target.files[0];
},
async uploadFile() {
if (!this.file) return;
const totalChunks = Math.ceil(this.file.size / this.chunkSize);
let uploadedChunks = 0;
for (let chunkIndex = 0; chunkIndex < totalChunks; chunkIndex++) {
const start = chunkIndex * this.chunkSize;
const end = Math.min(this.file.size, start + this.chunkSize);
const chunk = this.file.slice(start, end);
const formData = new FormData();
formData.append('file', chunk);
formData.append('fileName', this.file.name);
formData.append('chunkIndex', chunkIndex);
formData.append('totalChunks', totalChunks);
await this.uploadChunk(formData, chunkIndex);
uploadedChunks++;
this.uploadProgress = Math.floor((uploadedChunks / totalChunks) * 100);
}
alert('File upload completed!');
},
async uploadChunk(formData, chunkIndex, attempt = 0) {
  const maxRetries = 3;
  try {
    await axios.post('/upload', formData);
  } catch (error) {
    console.error(`Failed to upload chunk ${chunkIndex}`, error);
    if (attempt >= maxRetries) {
      throw error; // give up after maxRetries failed attempts
    }
    // Retry the same chunk, bounded by maxRetries to avoid infinite recursion
    await this.uploadChunk(formData, chunkIndex, attempt + 1);
  }
}
}
};
</script>
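The component above re-uploads every chunk each time it runs. For true resume support, the frontend would first ask the server which chunks it already has and skip them. The `/upload/status` endpoint below is hypothetical (the backend in this post does not implement it); the chunk-filtering helper, however, is pure and self-contained:

```javascript
// Given the total chunk count and the chunk indices the server already
// has, return the indices that still need to be uploaded.
function remainingChunks(totalChunks, uploadedIndices) {
  const uploaded = new Set(uploadedIndices);
  const remaining = [];
  for (let i = 0; i < totalChunks; i++) {
    if (!uploaded.has(i)) remaining.push(i);
  }
  return remaining;
}

// Hypothetical usage inside uploadFile(), assuming a GET /upload/status
// endpoint (not implemented here) that returns { uploadedIndices: [...] }:
async function chunksToUpload(fileName, totalChunks) {
  const res = await axios.get('/upload/status', { params: { fileName } });
  return remainingChunks(totalChunks, res.data.uploadedIndices);
}
```

The upload loop would then iterate only over `remainingChunks(...)` instead of every index, which is what makes an interrupted upload resumable.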
Backend (Spring Boot)
Create a Spring Boot application and write a controller to handle the file upload requests:
pom.xml
First, make sure you have the web starter (the only dependency this demo actually needs):
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>
FileUploadController.java
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

@RestController
public class FileUploadController {

    private static final String UPLOAD_DIR = "uploads/";

    // Track uploaded chunk indices per file name, so that concurrent
    // uploads of different files do not interfere with each other.
    private final ConcurrentHashMap<String, Set<Integer>> uploadedChunks = new ConcurrentHashMap<>();

    @PostMapping("/upload")
    public void uploadFile(@RequestParam("file") MultipartFile file,
                           @RequestParam("fileName") String fileName,
                           @RequestParam("chunkIndex") int chunkIndex,
                           @RequestParam("totalChunks") int totalChunks) throws IOException {
        File uploadDir = new File(UPLOAD_DIR);
        if (!uploadDir.exists()) {
            uploadDir.mkdirs();
        }
        File tempFile = new File(UPLOAD_DIR + fileName + ".part" + chunkIndex);
        try (FileOutputStream fos = new FileOutputStream(tempFile)) {
            fos.write(file.getBytes());
        }
        // Record this chunk under its file name
        Set<Integer> chunks = uploadedChunks.computeIfAbsent(fileName, k -> ConcurrentHashMap.newKeySet());
        chunks.add(chunkIndex);
        // If all chunks of this file are uploaded, combine them
        if (chunks.size() == totalChunks) {
            combineChunks(fileName, totalChunks);
            uploadedChunks.remove(fileName);
        }
    }
    private void combineChunks(String fileName, int totalChunks) throws IOException {
        File finalFile = new File(UPLOAD_DIR + fileName);
        // Overwrite any stale file from a previous run, then write parts in index order
        try (FileOutputStream fos = new FileOutputStream(finalFile)) {
            for (int i = 0; i < totalChunks; i++) {
                File partFile = new File(UPLOAD_DIR + fileName + ".part" + i);
                fos.write(java.nio.file.Files.readAllBytes(partFile.toPath()));
                partFile.delete();
            }
        }
    }
}
Summary
This code implements a chunked file upload system with a retry mechanism. The frontend uses Vue.js to slice the file into chunks and upload them; the backend uses Spring Boot to receive the chunks and reassemble the file. Note that true resumable upload additionally requires the server to report which chunks it already has; as written, the demo re-uploads all chunks after an interruption. You can extend and optimize this code as needed, for example with more error handling and logging.
From: https://blog.csdn.net/weixin_45408984/article/details/139506649