Implementing File Upload with React + Koa

Background

While working on my graduation project recently, I needed several file upload features: ordinary file upload, large file upload, resumable upload, and so on.

Server-side dependencies

  • koa (Node.js framework)
  • koa-router (Koa routing)
  • koa-body (Koa body-parsing middleware; parses POST request content, including multipart form data)
  • koa-static-cache (Koa static resource middleware, used to serve static resource requests)
  • koa-bodyparser (parses the request body into ctx.request.body)

Back-end CORS configuration

```js
// Allow cross-origin requests from the front end
app.use(async (ctx, next) => {
  ctx.set('Access-Control-Allow-Origin', '*');
  ctx.set(
    'Access-Control-Allow-Headers',
    'Content-Type, Content-Length, Authorization, Accept, X-Requested-With, yourHeaderField',
  );
  ctx.set('Access-Control-Allow-Methods', 'PUT, POST, GET, DELETE, OPTIONS');
  if (ctx.method === 'OPTIONS') {
    // Answer preflight requests directly
    ctx.status = 204;
  } else {
    await next();
  }
});
```

Back-end static resource configuration with koa-static-cache

```js
// Serve uploaded files as static resources
app.use(
  KoaStaticCache('./public', {
    prefix: '/public',
    dynamic: true,
    gzip: true,
  }),
);
```

Back-end request body parsing with koa-bodyparser

```js
const bodyParser = require('koa-bodyparser');
app.use(bodyParser());
```
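
For reference, here is a minimal sketch of a server entry point tying the middleware above together. The file layout, port, and names are assumptions for illustration, not from the original project:

```js
// app.js - minimal server bootstrap (sketch; layout and port are assumed)
const Koa = require('koa');
const Router = require('koa-router');
const KoaStaticCache = require('koa-static-cache');
const bodyParser = require('koa-bodyparser');

const app = new Koa();
const router = new Router();

// ...register the CORS middleware shown above...
app.use(
  KoaStaticCache('./public', { prefix: '/public', dynamic: true, gzip: true }),
);
app.use(bodyParser());

// The upload routes below (/upload, /upload_chunk, /merge_chunk) hang off this router
app.use(router.routes()).use(router.allowedMethods());

app.listen(3000, () => console.log('Server listening on http://localhost:3000'));
```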

Front-end dependencies

  • React
  • Antd
  • axios

Normal file upload

Back end

The backend only needs to configure koa-body with the appropriate options and pass it as middleware to `router.post(url, middleware, callback)`.

  • Backend code

```js
// Upload configuration
const path = require('path');
const KoaBody = require('koa-body');

const uploadOptions = {
  // Accept multipart/form-data (file uploads)
  multipart: true,
  formidable: {
    // Store uploads directly in the public folder; remember the trailing slash
    uploadDir: path.join(__dirname, '../../public/'),
    // Keep the file extension
    keepExtensions: true,
  },
};

router.post('/upload', KoaBody(uploadOptions), (ctx, next) => {
  // Get the uploaded file
  const file = ctx.request.files.file;
  // Take the generated file name from the stored path
  const fileName = path.basename(file.path);
  ctx.body = {
    code: 0,
    data: { url: `public/${fileName}` },
    message: 'success',
  };
});
```

Front end

  Here I use the FormData approach. The front end renders an `<input type='file'/>` to open the file chooser; in its onChange event, `e.target.files[0]` gives the selected file. Then create a FormData object and attach the file with `formData.append('file', targetFile)`.

  • Front-end code
```tsx
// isNil is assumed to come from a utility library such as lodash
const Upload = () => {
  const [url, setUrl] = useState<string>('');

  // Forward clicks to the hidden file input
  const handleClickUpload = () => {
    const fileLoader = document.querySelector('#btnFile') as HTMLInputElement;
    if (isNil(fileLoader)) {
      return;
    }
    fileLoader.click();
  };

  const handleUpload = async (e: any) => {
    // Get the selected file
    const file = e.target.files[0];
    const formData = new FormData();
    formData.append('file', file);
    // Upload the file
    const { data } = await uploadSmallFile(formData);
    console.log(data.url);
    setUrl(`${baseURL}${data.url}`);
  };

  return (
    <div>
      <input type="file" id="btnFile" onChange={handleUpload} style={{ display: 'none' }} />
      <Button onClick={handleClickUpload}>Upload small files</Button>
      <img src={url} />
    </div>
  );
};
```
  • Other alternative methods
    • input + form: set the form's action to the backend endpoint, with enctype="multipart/form-data" and method="post" (a minimal sketch follows this list)
    • Use FileReader to read the file data and upload it; browser compatibility is not particularly good
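
As referenced in the list above, a minimal sketch of the input + form alternative (the action URL reuses the /upload route from this post; the component name is made up for illustration):

```tsx
// Plain form upload: the browser submits the file itself, no XHR needed.
// On submit the page navigates to the backend response.
const FormUpload = () => (
  <form action={`${baseURL}/upload`} method="post" encType="multipart/form-data">
    <input type="file" name="file" />
    <button type="submit">Upload</button>
  </form>
);
```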

Large file upload

  When a file is very large, the upload request may time out. In that case we can split the file into chunks: each small piece is sent to the server labeled with which file it belongs to and its position within that file. Once all chunks have arrived, the front end asks the backend to merge them into the complete file, which finishes the whole transfer.

Front end

  • Obtaining the file is the same as before, so I won't repeat it
  • Set a default chunk size, slice the file, and name each chunk filename.index.ext; upload recursively until the entire file has been sent, then request the merge
```ts
const handleUploadLarge = async (e: any) => {
  // Get the selected file
  const file = e.target.files[0];
  // Upload it chunk by chunk
  await uploadEveryChunk(file, 0);
};

const uploadEveryChunk = (file: File, index: number) => {
  console.log(index);
  const chunkSize = 512; // chunk size in bytes (small here for demonstration)
  // [file name, file extension] - assumes a single dot in the name
  const [fname, fext] = file.name.split('.');
  // Starting byte of the current chunk
  const start = index * chunkSize;
  if (start >= file.size) {
    // Past the end of the file: stop recursing and ask the server to merge
    return mergeLargeFile(file.name);
  }
  const blob = file.slice(start, start + chunkSize);
  // Name each chunk filename.index.ext
  const blobName = `${fname}.${index}.${fext}`;
  const blobFile = new File([blob], blobName);
  const formData = new FormData();
  formData.append('file', blobFile);
  uploadLargeFile(formData).then((res) => {
    // Upload the next chunk recursively
    uploadEveryChunk(file, ++index);
  });
};
```

Back end

The backend needs to provide two interfaces

Upload

Each uploaded chunk is stored in a folder named after its original file, so the chunks can easily be merged later.

```js
const path = require('path');
const fse = require('fs-extra');
const KoaBody = require('koa-body');

const uploadStencilPreviewOptions = {
  multipart: true,
  formidable: {
    uploadDir: path.resolve(__dirname, '../../temp/'), // where chunks are first written
    keepExtensions: true,
    maxFieldsSize: 2 * 1024 * 1024,
  },
};

router.post('/upload_chunk', KoaBody(uploadStencilPreviewOptions), async (ctx) => {
  try {
    const file = ctx.request.files.file;
    // [name, index, ext] - the chunk name produced by the front end
    const fileNameArr = file.name.split('.');
    const UPLOAD_DIR = path.resolve(__dirname, '../../temp');
    // Directory where this file's chunks are collected
    const chunkDir = `${UPLOAD_DIR}/${fileNameArr[0]}`;
    if (!fse.existsSync(chunkDir)) {
      // Create a temporary directory for this large file if it does not exist yet
      await fse.mkdirs(chunkDir);
    }
    // Store each chunk under its index so the merge can sort them
    const dPath = path.join(chunkDir, fileNameArr[1]);
    // Move the chunk from temp into this file's chunk directory
    await fse.move(file.path, dPath, { overwrite: true });
    ctx.body = {
      code: 0,
      message: 'File uploaded successfully',
    };
  } catch (e) {
    ctx.body = {
      code: -1,
      message: `File upload failed: ${e.toString()}`,
    };
  }
});
```

Merge

  When the front end requests the merge, it sends the file name. The backend finds the temporary folder belonging to that name, reads the chunks in index order, and appends them one after another with fse.appendFileSync(path, data), which performs the merge; it then deletes the temporary folder to free the space.

```js
router.post('/merge_chunk', async (ctx) => {
  try {
    const { fileName } = ctx.request.body;
    const fname = fileName.split('.')[0];
    const TEMP_DIR = path.resolve(__dirname, '../../temp');
    const static_preview_url = '/public/previews';
    const STORAGE_DIR = path.resolve(__dirname, `../..${static_preview_url}`);
    const chunkDir = path.join(TEMP_DIR, fname);
    const chunks = await fse.readdir(chunkDir);
    chunks
      .sort((a, b) => a - b) // chunk file names are numeric indices
      .map((chunkPath) => {
        // Append each chunk to the target file in order
        fse.appendFileSync(
          path.join(STORAGE_DIR, fileName),
          fse.readFileSync(`${chunkDir}/${chunkPath}`),
        );
      });
    // Delete the temporary chunk folder
    fse.removeSync(chunkDir);
    // URL where the merged file can be accessed
    const url = `http://${ctx.request.header.host}${static_preview_url}/${fileName}`;
    ctx.body = {
      code: 0,
      data: { url },
      message: 'success',
    };
  } catch (e) {
    ctx.body = { code: -1, message: `Merge failed: ${e.toString()}` };
  }
});
```

Resumable upload

  While a large file is transferring, a page refresh or a transient failure can abort the upload, and forcing the user to start over from the beginning is a bad experience. So we record the position where the transfer stopped and resume from there next time. I read and write this record through localStorage.

```ts
const handleUploadLarge = async (e: any) => {
  // Get the selected file
  const file = e.target.files[0];
  const record = JSON.parse(localStorage.getItem('uploadRecord') as any);
  if (!isNil(record)) {
    // For demonstration we ignore name collisions here; a hash of the file
    // (or, for large files, a hash of one chunk plus the file size) can be
    // used to decide whether two files are really the same
    if (record.name === file.name) {
      return await uploadEveryChunk(file, record.index);
    }
  }
  // No record: upload chunk by chunk from the beginning
  await uploadEveryChunk(file, 0);
};

const uploadEveryChunk = (file: File, index: number) => {
  const chunkSize = 512; // chunk size in bytes
  // [file name, file extension]
  const [fname, fext] = file.name.split('.');
  // Starting byte of the current chunk
  const start = index * chunkSize;
  if (start >= file.size) {
    // Past the end of the file: stop recursing and ask the server to merge
    return mergeLargeFile(file.name).then(() => {
      // Remove the record once the merge has succeeded
      localStorage.removeItem('uploadRecord');
    });
  }
  const blob = file.slice(start, start + chunkSize);
  // Name each chunk filename.index.ext
  const blobName = `${fname}.${index}.${fext}`;
  const blobFile = new File([blob], blobName);
  const formData = new FormData();
  formData.append('file', blobFile);
  uploadLargeFile(formData).then((res) => {
    // Record the next index after each chunk uploads successfully
    localStorage.setItem(
      'uploadRecord',
      JSON.stringify({ name: file.name, index: index + 1 }),
    );
    // Upload the next chunk recursively
    uploadEveryChunk(file, ++index);
  });
};
```

Judging whether files are identical

  One option is to compute the MD5 or another hash of the whole file, but hashing a very large file can take a long time. Instead, we can hash just one chunk of the file together with the file size, comparing by partial sampling. Here the crypto-js library computes the MD5 and FileReader reads the file:

```ts
// Compute an MD5 over a sample of the file to check whether it was uploaded before.
// CryptoJs is the crypto-js library; blob, blobToFile, getRandomFileName and
// uploadPreview come from the surrounding code of the original project.
const sign = tempFile.slice(0, 512);
const signFile = new File(
  // Sample: the first 512 bytes plus the file size
  [sign, (tempFile.size as unknown) as BlobPart],
  '',
);
const reader = new FileReader();
reader.onload = function (event) {
  const binary = event?.target?.result;
  const md5 = binary && CryptoJs.MD5(binary as string).toString();
  const record = localStorage.getItem('upLoadMD5');
  if (isNil(md5)) {
    // Hashing failed: upload from scratch under a random name
    const file = blobToFile(blob, `${getRandomFileName()}.png`);
    return uploadPreview(file, 0, md5);
  }
  const file = blobToFile(blob, `${md5}.png`);
  if (isNil(record)) {
    // No previous record: upload from the beginning and record the md5
    return uploadPreview(file, 0, md5);
  }
  const recordObj = JSON.parse(record);
  if (recordObj.md5 == md5) {
    // Same file as last time: resume from the recorded position
    return uploadPreview(file, recordObj.index, md5);
  }
  return uploadPreview(file, 0, md5);
};
reader.readAsBinaryString(signFile);
```
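
The blobToFile helper used above is not defined in the original post; a minimal sketch of what it might look like (this implementation is an assumption):

```ts
// Hypothetical helper: wrap a Blob in a File so it can carry a file name
const blobToFile = (blob: Blob, fileName: string): File =>
  new File([blob], fileName, { type: blob.type });
```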

Summary

  I hadn't learned much about file uploading before. Building this feature for my graduation project gave me a first real understanding of the front-end and back-end code involved. These approaches are probably only some of the options rather than all of them, and I hope to keep improving as I learn.
  This is my first blog post on Juejin (Nuggets). Since starting my internship I have realized how much I still don't know, and I hope that by blogging consistently I can organize my knowledge and record my learning process. If you spot a problem, please don't hesitate to point it out, thx.

Even if no one applauds for you in the end, take your curtain call gracefully and thank yourself for the hard work.