Uploading video content
Recently I was working on a project where users could share a video on a web application with a limited set of users. To make sure that videos can be played inside a browser using HTML5, these videos have to be converted.
The Amazon AWS platform offers a nice set of services that I could use to build this application. To convert videos I make use of Amazon Elastic Transcoder, which can convert videos from one bucket to another. Using signed S3 URLs I could also make sure that converted videos can only be retrieved by the users with access to them.
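As an aside, here is a minimal sketch of generating such a signed link with the AWS SDK for PHP. The bucket and key names are hypothetical, and $client is assumed to be a configured Aws\S3\S3Client like the one registered later in this post:

<?php
use Aws\S3\S3Client;

// $client is an already configured S3Client (see the services.yml section below)
$cmd = $client->getCommand('GetObject', [
    'Bucket' => 'my-video-output-bucket', // hypothetical bucket with converted videos
    'Key'    => 'converted/video.mp4',    // hypothetical object key
]);

// The resulting URL is only valid for a limited time
$request = $client->createPresignedRequest($cmd, '+20 minutes');
$signedUrl = (string) $request->getUri();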
Amazon S3 browser-based uploading
However, uploading large files to your own web server can be quite a hassle to get right; a lot of security and scalability issues may arise. Amazon S3 supports browser-based uploading. This gives you the opportunity to let your users upload files directly to your S3 bucket, without the need to upload them to your own server first. This solution especially makes sense when you are already planning on storing your files on Amazon S3, like I was.
Multipart uploading
Videos, however, usually are large files, and uploading large files at once has some difficulties: sometimes the upload fails and you have to upload the file again entirely. To prevent this you can make use of file chunking: a large file is split into smaller parts that are uploaded separately. Once all parts are uploaded, you combine them again to reconstruct the original file. Whenever your upload fails, you can just re-upload a single part instead of the whole file. It also gives you the opportunity to pause and resume your uploads.
To merge the parts on Amazon S3 you can make use of the Amazon S3 multipart upload functionality.
Signing
You probably don’t want everyone to be able to upload an unlimited number of files to your bucket. Therefore every request that the user makes has to be accompanied by a signature, which is generated on your own web server. After the web server has made sure that the user is allowed to upload files, it can generate the signature by hashing certain constraints on the request (like file name, allowed MIME types and maximum file size) together with your AWS secret key. When Amazon receives the request it will do the same. When the signatures are equal, it proves that someone or something with access to your key has signed the user's request.
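To make the idea concrete, here is a minimal sketch of that computation as it works for S3 POST policies. The $constraints array and $secretKey variable are hypothetical stand-ins; the same construction appears in the controller later in this post:

<?php
// $constraints is a hypothetical array of upload constraints (expiration, bucket, key, ...)
$policy = base64_encode(json_encode($constraints));

// Both your server and Amazon compute this HMAC over the policy; matching
// values prove that someone with access to $secretKey approved the request.
$signature = base64_encode(hash_hmac('sha1', $policy, $secretKey, true));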
PHP Symfony + plupload
I will now show how you can implement this using PHP Symfony and plupload. The latter is a JavaScript library that will help us with the chunking and other aspects of file uploading on the browser side.
Once a user starts uploading a new file, plupload will make a request to the web server indicating that it wants to upload a new file. After some checks the server will respond with a filename and a corresponding signature for the first chunk. Depending on the client's file size, plupload will divide the file into one or more chunks. For every successive chunk a request will be made to the web server asking for a new signature for that specific chunk. At the same time the server keeps some administration to track which files are signed for which user and how many chunks have been signed for.
In my case the web server checks that the user is correctly authenticated, since we only want authenticated users to upload videos. It also limits the number of sign requests a user can make in 24 hours, to prevent malicious users from uploading huge amounts of files.
After all chunks have been uploaded, the client sends a merge request to the server. The server flags the upload as finished. Using a cronjob, the chunks are then merged with a multipart request and submitted to Amazon Elastic Transcoder for conversion. The multipart request could also be done directly on the user's merge request; however, I found that the multipart request can take quite some time for large files. To improve the user experience I decided to work with a cronjob.
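For illustration, here is a minimal sketch of what such a cron-driven console command could look like. It assumes a hypothetical findUnmerged() repository method and a hypothetical video_upload.merger service wrapping the multipart merge shown later in this post:

<?php
namespace [YOUR_NAMESPACE]\Command;

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class MergeUploadsCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this->setName('video:merge-uploads')
            ->setDescription('Merges finished chunked uploads on S3');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $repo = $this->getContainer()->get('doctrine')
            ->getRepository('[YOUR_NAMESPACE]\Entity\Upload');
        $merger = $this->getContainer()->get('video_upload.merger'); // hypothetical service

        // findUnmerged() is a hypothetical repository method returning uploads
        // that have a doneTime set but have not been merged yet
        foreach ($repo->findUnmerged() as $upload) {
            $merger->merge($upload->getFilename(), $upload->getChunks());
            $upload->setState('merged');
            $repo->save($upload);
            $output->writeln(sprintf('Merged %s', $upload->getFilename()));
        }
    }
}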
Installing dependencies
For the multipart request we will make use of the AWS SDK for PHP, which you can install via composer:
composer require aws/aws-sdk-php
Configuring Amazon users and S3 buckets
Next, log in to the Amazon console and create a new bucket; in my case I will call it my-video-upload-bucket. You can of course also reuse an existing bucket, but then you have to be sure no file name collisions can occur.
Create upload user
I recommend creating a new user under the IAM panel that has the sole purpose of signing upload requests to your bucket. The user only needs programmatic access; in my case I called this user uploader. Make sure that your user also has PutObject access to your bucket, which can be done by attaching the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1470746880000",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-video-upload-bucket/*"
            ]
        }
    ]
}
Make sure that you replace my-video-upload-bucket with the name of your own bucket.
Lastly, save your bucket name, user key ID and secret access key in your parameters.yml file:
parameters:
    s3_uploader_id: ...
    s3_uploader_key: ...
    s3_uploader_bucket: my-video-upload-bucket
SDK user for multipart request
The PHP SDK should be configured to use another user account. Please make sure that this user has permission to perform the multipart request on the S3 bucket.
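The exact permissions depend on your setup, but as a starting point, a policy along these lines should cover the merge performed later in this post (copying the chunks into a multipart upload, listing the parts, completing the upload and deleting the chunks). Treat the action list as an assumption and verify it against the AWS documentation:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": [
                "arn:aws:s3:::my-video-upload-bucket/*"
            ]
        }
    ]
}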
To do the multipart merge we will make use of the Aws\S3\S3Client class. In services.yml I register it like this:
video_upload.s3_client:
    class: Aws\S3\S3Client
    arguments: ['%aws_creds%']
    factory: ['Aws\S3\S3Client', 'factory']
In my parameters.yml file the aws_creds variable looks like this:
parameters:
    aws_creds:
        profile: ***
        region: eu-west-1
        version: latest
But this might differ depending on the way you configure your PHP SDK. More information can be found in the AWS SDK for PHP documentation on configuration and credentials.
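If you prefer not to use a named profile from ~/.aws/credentials, the SDK also accepts explicit credentials in its config array. A minimal sketch, with placeholder values to replace with the key pair of your SDK user:

<?php
use Aws\S3\S3Client;

// Explicit credentials instead of a named profile; the placeholder
// values below are assumptions to be replaced with your own key pair.
$client = new S3Client([
    'version'     => 'latest',
    'region'      => 'eu-west-1',
    'credentials' => [
        'key'    => 'YOUR_ACCESS_KEY_ID',
        'secret' => 'YOUR_SECRET_ACCESS_KEY',
    ],
]);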
Administration
To limit the number of signs per user we create the following entity. It keeps track of the number of signs, the number of chunks and the sign dates. It could also be used to clean up unused uploads.
<?php
namespace [YOUR_NAMESPACE]\Entity;

use Doctrine\ORM\Mapping as ORM;

/**
 * Class Upload
 * @ORM\Table(indexes={@ORM\Index(name="upload_fn", columns={"filename"}), @ORM\Index(name="state", columns={"state"})})
 * @ORM\Entity(repositoryClass="[YOUR_NAMESPACE]\Repository\UploadRepository")
 */
class Upload
{
    /**
     * @ORM\Id
     * @ORM\Column(type="integer")
     * @ORM\GeneratedValue(strategy="AUTO")
     * @var int
     */
    protected $id;

    /**
     * @ORM\Column(type="string", length=100)
     * @var string
     */
    protected $filename;

    /**
     * @var User
     * @ORM\ManyToOne(targetEntity="User")
     */
    protected $user;

    /**
     * @ORM\Column(type="integer")
     * @var int
     */
    protected $timesSigned;

    /**
     * @ORM\Column(type="datetime")
     * @var \DateTime
     */
    protected $lastSigned;

    /**
     * @ORM\Column(type="datetime", nullable=true)
     * @var \DateTime
     */
    protected $doneTime;

    /**
     * @ORM\Column(type="integer")
     * @var int
     */
    protected $chunks;

    /**
     * @ORM\Column(type="string")
     * @var string
     */
    protected $state;

    /**
     * Upload constructor.
     */
    public function __construct()
    {
        $this->lastSigned = new \DateTime();
        $this->timesSigned = 1;
        $this->chunks = 0;
        $this->state = '';
    }

    /**
     * @param User $user
     * @param string $filename
     * @return Upload
     */
    public static function createNew($user, $filename)
    {
        $x = new self();
        $x->setUser($user);
        $x->setFilename($filename);

        return $x;
    }

    /**
     * @return \DateTime
     */
    public function getDoneTime()
    {
        return $this->doneTime;
    }

    /**
     * @param \DateTime $doneTime
     *
     * @return self
     */
    public function setDoneTime($doneTime)
    {
        $this->doneTime = $doneTime;
        return $this;
    }

    /**
     * @return int
     */
    public function getId()
    {
        return $this->id;
    }

    /**
     * @param int $id
     *
     * @return self
     */
    public function setId($id)
    {
        $this->id = $id;
        return $this;
    }

    /**
     * @return string
     */
    public function getFilename()
    {
        return $this->filename;
    }

    /**
     * @param string $filename
     *
     * @return self
     */
    public function setFilename($filename)
    {
        $this->filename = $filename;
        return $this;
    }

    /**
     * @return User
     */
    public function getUser()
    {
        return $this->user;
    }

    /**
     * @param User $user
     *
     * @return self
     */
    public function setUser($user)
    {
        $this->user = $user;
        return $this;
    }

    /**
     * @return int
     */
    public function getTimesSigned()
    {
        return $this->timesSigned;
    }

    /**
     * @param int $timesSigned
     *
     * @return self
     */
    public function setTimesSigned($timesSigned)
    {
        $this->timesSigned = $timesSigned;
        return $this;
    }

    /**
     * @return \DateTime
     */
    public function getLastSigned()
    {
        return $this->lastSigned;
    }

    /**
     * @param \DateTime $lastSigned
     *
     * @return self
     */
    public function setLastSigned($lastSigned)
    {
        $this->lastSigned = $lastSigned;
        return $this;
    }

    /**
     * @return int
     */
    public function getChunks()
    {
        return $this->chunks;
    }

    /**
     * @param int $chunks
     *
     * @return self
     */
    public function setChunks($chunks)
    {
        $this->chunks = $chunks;
        return $this;
    }

    /**
     * @return string
     */
    public function getState()
    {
        return $this->state;
    }

    /**
     * @param string $state
     *
     * @return self
     */
    public function setState($state)
    {
        $this->state = $state;
        return $this;
    }
}
Our repository, responsible for retrieving, changing and saving the entities, will then look like this:
<?php
namespace [YOUR_NAMESPACE]\Repository;

use Doctrine\ORM\EntityRepository;
use [YOUR_NAMESPACE]\Entity\Upload;
use [YOUR_NAMESPACE]\Entity\User;

class UploadRepository extends EntityRepository
{
    /**
     * @param Upload $upload
     */
    public function save($upload)
    {
        $em = $this->getEntityManager();
        $em->persist($upload);
        $em->flush();
    }

    /**
     * @param string $fn
     * @param User $user
     * @return Upload|null
     */
    public function findByFilename($fn, $user)
    {
        $em = $this->getEntityManager();
        $q = $em->createQuery("SELECT u FROM [YOUR_BUNDLE]:Upload u WHERE u.filename = :fn AND u.user = :user");
        $q->setParameter('fn', $fn);
        $q->setParameter('user', $user);

        return $q->getOneOrNullResult();
    }

    /**
     * @param Upload $upload
     */
    public function signNext($upload)
    {
        $em = $this->getEntityManager();
        $q = $em->createQuery("UPDATE [YOUR_BUNDLE]:Upload u SET u.lastSigned = CURRENT_TIMESTAMP(), u.timesSigned = u.timesSigned + 1 WHERE u.id = :id");
        $q->setParameter('id', $upload->getId());
        $q->execute();
    }

    /**
     * @param User $user
     *
     * @return int
     */
    public function signsLast24h($user)
    {
        $d = new \DateTime();
        $d->modify("-24 hours");
        $em = $this->getEntityManager();
        $q = $em->createQuery("SELECT sum(u.timesSigned) FROM [YOUR_BUNDLE]:Upload u WHERE u.user = :user AND u.lastSigned > :date");
        $q->setParameter("user", $user);
        $q->setParameter('date', $d);

        return $q->getSingleScalarResult();
    }
}
Controllers and the signing process
This is the controller that is responsible for signing the plupload AJAX requests:
<?php

namespace [YOUR_NAMESPACE]\Controller;

use Aws\S3\S3Client;
use Monolog\Logger;
use [YOUR_NAMESPACE]\Entity\Upload;
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Security\Core\Exception\AccessDeniedException;
use Symfony\Component\HttpFoundation\JsonResponse;
use [YOUR_NAMESPACE]\Repository\UploadRepository;

class UploadController extends Controller
{
    const MAX_SIGNS_PER_24H = 2500; //max signs per user per 24h

    /**
     * The new action should be called if the user wants to start uploading a new file.
     * It will provide a filename to which the user can upload the file
     * @param string $filename
     * @param int $chunked
     * @param Request $request
     * @return JsonResponse
     */
    public function newAction($filename, $chunked, Request $request)
    {
        $this->forceCanSign(); //make sure user is not exceeding number of signs in last 24h

        //get extension of original filename, so we can reuse it
        $ext = pathinfo($filename, PATHINFO_EXTENSION);
        if ($ext == '') {
            $ext = "tmp";
        }

        //i prefer a uuid, but that requires a separate php extension
        $uniqueKey = function_exists('uuid_create') ? uuid_create() : uniqid();

        //create new random filename, we have to be sure it has not been used before
        $s3key = strtolower($uniqueKey).".".$ext;

        //create new upload entity, so we can track and limit user uploads
        $up = Upload::createNew($this->getUser(), $s3key);
        $this->getUploadRepository()->save($up);

        //get full filename if chunked
        if ($chunked > 0) {
            $fn = $s3key.".0";
        } else {
            $fn = $s3key;
        }

        $p = $this->createPolicy($fn);
        $s = $this->sign($p);

        //It probably is a good idea to add some logging behaviour so you are able to check for security vulnerabilities
        //and other possible malfunctioning
        $msg = sprintf("Create new %s file with s3 file name %s and client filename %s", $chunked ? 'chunked' : 'unchunked', $s3key, $filename);
        $this->getLogger()->info($msg, ['user' => $this->getUser()->getEmail(), 'ip' => $request->getClientIp(), "agent" => $request->headers->get('User-Agent')]);

        $data = ['filename' => $s3key, 'policy' => $p, 'signature' => $s];

        return new JsonResponse($data);
    }

    /**
     * Creates a signature for a chunk.
     * This function does not enforce the chunks to be signed in successive order,
     * which means they can be signed in any given order
     * @param string $filename
     * @param int $chunk
     * @param Request $request
     * @return JsonResponse
     */
    public function chunkAction($filename, $chunk, Request $request)
    {
        $this->forceCanSign(); //make sure user is not exceeding number of signs in last 24h
        $chunk = (int) $chunk;
        $upload = $this->getUploadRepository()->findByFilename($filename, $this->getUser());
        //if user is requesting to sign a chunk for a filename that was not assigned to that user, refuse to sign
        if ($upload === null) {
            throw $this->createNotFoundException();
        }

        //add to counter
        $this->getUploadRepository()->signNext($upload);
        $fn = $filename.".".$chunk;

        $p = $this->createPolicy($fn);
        $s = $this->sign($p);

        $msg = sprintf("Signed new chunk %d with s3 file name %s", $chunk, $filename);
        $this->getLogger()->info($msg, ['user' => $this->getUser()->getEmail(), 'ip' => $request->getClientIp(), "agent" => $request->headers->get('User-Agent')]);

        $data = ['filename' => $fn, 'policy' => $p, 'signature' => $s];

        return new JsonResponse($data);
    }

    /**
     * Once all chunks are uploaded the following function should be called to make sure that the file is merged
     * @param string $filename
     * @param int $chunks
     * @param Request $request
     * @return JsonResponse
     */
    public function mergeAction($filename, $chunks, Request $request)
    {
        $upload = $this->getUploadRepository()->findByFilename($filename, $this->getUser());

        if ($upload === null) {
            throw $this->createNotFoundException();
        }
        $chunks = (int) $chunks;
        //if you want to merge the file via a cronjob you have to save the number of chunks
        $upload->setChunks($chunks);
        $upload->setDoneTime(new \DateTime("now"));

        $this->getUploadRepository()->save($upload);

        //merge chunks on aws s3
        $this->merge($filename, $chunks);

        $msg = sprintf("Successfully finalized and merged file %s", $filename);
        $this->getLogger()->info($msg, ['user' => $this->getUser()->getEmail(), 'ip' => $request->getClientIp(), "agent" => $request->headers->get('User-Agent')]);

        return new JsonResponse(['filename' => $filename]);
    }

    /**
     * This function actually merges the chunks into one single file on Amazon using the multipart upload request.
     * In my own implementation I actually did this via a cronjob, but for the purpose of this tutorial I will do it
     * on the request
     * @param string $filename
     * @param int $chunks
     */
    protected function merge($filename, $chunks)
    {
        //Skip if file does not have to be merged
        if ($chunks > 0) {
            //We make use of the aws SDK s3 client implementation
            $client = $this->getS3Client();
            $bucket = $this->getParameter('s3_uploader_bucket');

            //Indicate that you want to merge a file into $filename
            $response = $client->createMultipartUpload(['Bucket' => $bucket, 'Key' => $filename]);
            $data = ['filename' => $filename, "UploadId" => $response['UploadId']];

            $objects = [];

            //copy every chunk into the multipart upload, and remember the chunks so we can delete them afterwards
            for ($c = 0; $c < $chunks; $c++) {
                $objects[] = ['Key' => $filename.".".$c];
                $client->uploadPartCopy([
                    'CopySource' => $bucket."/".$filename.".".$c,
                    'Bucket' => $bucket,
                    'Key' => $filename,
                    'UploadId' => $data['UploadId'],
                    'PartNumber' => $c + 1,
                ]);
            }

            //retrieve the list of all uploaded parts
            $partsModel = $client->listParts([
                'Bucket' => $bucket,
                'Key' => $filename,
                'UploadId' => $data['UploadId'],
            ]);

            //make sure to finalize
            $model = $client->completeMultipartUpload([
                'Bucket' => $bucket,
                'Key' => $filename,
                'UploadId' => $data['UploadId'],
                'MultipartUpload' => [
                    'Parts' => $partsModel['Parts'],
                ],
            ]);

            //delete the old chunks afterwards
            $client->deleteObjects([
                'Bucket' => $bucket,
                'Delete' => [
                    'Objects' => $objects,
                ],
            ]);
        }
    }

    /**
     * Enforces that the number of signs for a given user does not exceed the maximum for the last 24 hours
     * @throws AccessDeniedException
     */
    protected function forceCanSign()
    {
        if ($this->getUploadRepository()->signsLast24h($this->getUser()) > self::MAX_SIGNS_PER_24H) {
            throw new AccessDeniedException();
        }
    }

    /**
     * This defines the constraints on a user request. This policy will also be passed with the
     * upload request. Given this policy and your key Amazon will then rebuild your signature. If the
     * signatures match it means that someone with access to your key has approved this
     */
    protected function createPolicy($fn)
    {
        $d = new \DateTime();
        $d->modify('+1 days');
        $policy = [
            'expiration' => $d->format('Y-m-d\TH:i:s\Z'), //signature expires within 1 day
            'conditions' => [
                ['bucket' => $this->getParameter('s3_uploader_bucket')],
                ['acl' => 'private'],
                ['content-length-range', 0, 10485760], //max size of a single upload/chunk is 10MB
                ['key' => $fn], //the policy also enforces a specific key (amazon s3 language for file), only allowing the user to upload to this filename
                ['success_action_status' => '200'], //after upload respond with 200 status code
                ["starts-with", '$Content-Type', "video/"], //i only want to allow video files
                ["starts-with", '$name', ""],
                ["starts-with", '$chunk', ""],
                ["starts-with", '$chunks', ""],
            ],
        ];

        return base64_encode(json_encode($policy));
    }

    /**
     * Signs the actual policy (obtained by createPolicy)
     */
    protected function sign($policyStr)
    {
        return base64_encode(hash_hmac(
            'sha1',
            $policyStr,
            $this->getParameter('s3_uploader_key'),
            true
        ));
    }

    //some methods that get services from the container
    //so that my IDE can autocomplete

    /**
     * @return UploadRepository
     */
    protected function getUploadRepository()
    {
        return $this->getDoctrine()->getRepository('[YOUR_NAMESPACE]\Entity\Upload');
    }

    /**
     * @return S3Client
     */
    protected function getS3Client()
    {
        return $this->get("video_upload.s3_client");
    }

    /**
     * @return Logger
     */
    protected function getLogger()
    {
        return $this->get('monolog.logger.uploader');
    }
}
Also make sure to register your routes:
upload_sign_new:
    path: /upload/sign/new/{filename}/{chunked}
    defaults: { _controller: [YOUR_BUNDLE]:Upload:new }
    methods: [POST]

upload_sign_chunk:
    path: /upload/sign/chunk/{filename}/{chunk}
    defaults: { _controller: [YOUR_BUNDLE]:Upload:chunk }
    methods: [POST]

upload_merge:
    path: /upload/merge/{filename}/{chunks}
    defaults: { _controller: [YOUR_BUNDLE]:Upload:merge }
    methods: [POST]
Don't forget to make sure that a user is logged in when calling one of these routes, by adding something like this to your security.yml:
access_control:
    - { path: ^/upload/, roles: ROLE_USER }
Configuring plupload
Next we have to configure plupload to perform the requests so that our uploads are signed. The JavaScript is based on a script by Ben Nadel, to which I made some small changes. This script assumes that you have included jQuery and plupload in your HTML page; plupload can be downloaded from its website.
var uploader = new plupload.Uploader({
    runtimes : 'html5,flash',
    browse_button : 'select-file', // you can pass in id...
    container: document.getElementById('container'), // ... or DOM Element itself
    url : $('#select-file').data('action'),
    flash_swf_url : '../js/Moxie.swf',
    filters : {
        max_file_size : '1500mb',
        mime_types: [
            {title : "Video files", extensions : "mp4,mov,mpeg,mpg,avi,mkv,mts,3gp,m4v"} //allowed extensions
        ]
    },
    urlstream_upload: true,
    file_data_name: "file", //the name of the POST field that contains the file, s3 expects this to be 'file'
    max_retries: 3,
    multipart: true,
    chunk_size: '10mb', //all chunks except the last have to be at least 5mb for the amazon s3 merge to accept them; plupload's default chunking does not guarantee this (a 7mb file would be split into a 5mb and a 2mb chunk)
    multipart_params: {
        "acl": "private",
        "AWSAccessKeyId": $('#select-file').data('aws-access-id'),
        "Content-Type": "video/*",
        "success_action_status": 200
    }
});

//event handlers:
//once a file is finished make sure to merge it
function hFileUploaded(up, file, object) {
    merge(file.s3Key, file.chunkIndex);
}
uploader.bind("FileUploaded", hFileUploaded);

function hUploadProgress(up, file) {
    //track progress
}
uploader.bind('UploadProgress', hUploadProgress);

function hError(up, err) {
    if (err.code == -601) {
        // When file extension is not allowed
    } else {
        //other error
    }
}
uploader.bind('Error', hError);

function hFilesAdded(up, files) {
    plupload.each(files, function (file) {
        uploader.start();
    });
}
uploader.bind('FilesAdded', hFilesAdded);

//Will sign a new filename
function signNew(filename, chunked)
{
    var strChunked = chunked ? "1" : "0";
    var u = $('#select-file').data('sign-new');
    u = u.replace("_filename_", filename);
    u = u.replace("_chunked_", strChunked);

    var data;
    $.ajax({
        url: u,
        method: 'POST',
        success: function (result) {
            data = result;
        },
        async: false //we have to sign before we can do the request
    });
    //data will contain file name and signature
    return data;
}

//will sign a chunk of a file
function signChunk(filename, chunk)
{
    var u = $('#select-file').data('sign-chunk');
    u = u.replace("_filename_", filename);
    u = u.replace("_chunk_", chunk);
    var data;
    $.ajax({
        url: u,
        method: 'POST',
        success: function (result) {
            data = result;
        },
        async: false //we have to sign before we can do the request
    });
    return data;
}

//merges file
function merge(filename, chunks)
{
    if (typeof chunks === 'undefined') {
        chunks = 0;
    }
    var u = $('#select-file').data('merge');
    u = u.replace("_filename_", filename);
    u = u.replace("_chunks_", chunks);

    var data;
    $.ajax({
        url: u,
        method: 'POST',
        success: function (result) {
            data = result;
        },
        async: true
    });
    return data;
}

//signs files that are < 5mb or the first chunk
function hBeforeUpload(uploader, file) {
    console.log("File upload about to start.", file.name);
    // Track the chunking status of the file (for the success handler). With
    // Amazon S3, we can only chunk files if the leading chunks are at least
    // 5MB in size.
    file.isChunked = isFileSizeChunkableOnS3(file.size);

    // we do our first signing, which determines the filename of this file
    var signature = signNew(file.name, file.isChunked);

    file.s3Key = signature.filename;

    uploader.settings.multipart_params.signature = signature.signature;
    uploader.settings.multipart_params.policy = signature.policy;
    // This file can be chunked on S3 - at least 5MB in size.
    if (file.isChunked) {
        // Since this file is going to be chunked, we'll need to update the
        // chunk index every time a chunk is uploaded. We'll start it at zero
        // and then increment it on each successful chunk upload.
        file.chunkIndex = 0;
        // Create the chunk-based S3 resource by appending the chunk index.
        file.chunkKey = (file.s3Key + "." + file.chunkIndex);
        // Define the chunk size - this is what tells Plupload that the file
        // should be chunked. In this case, we are using 5MB because anything
        // smaller will be rejected by S3 later when we try to combine them.
        // --
        // NOTE: Once the Plupload settings are defined, we can't just use the
        // specialized size values - we actually have to pass in the parsed
        // value (which is just the byte-size of the chunk).
        uploader.settings.chunk_size = plupload.parseSize("5mb");
        // Since we're chunking the file, Plupload will take care of the
        // chunking. As such, delete any artifacts from our non-chunked
        // uploads (see ELSE statement).
        delete(uploader.settings.multipart_params.chunks);
        delete(uploader.settings.multipart_params.chunk);
        // Update the Key and Filename so that Amazon S3 will store the
        // CHUNK resource at the correct location.
        uploader.settings.multipart_params.key = file.chunkKey;
    // This file CANNOT be chunked on S3 - it's not large enough for S3's
    // multi-upload resource constraints
    } else {
        // Remove the chunk size from the settings - this is what tells
        // Plupload that this file should NOT be chunked (ie, that it should
        // be uploaded as a single POST).
        uploader.settings.chunk_size = 0;
        // That said, in order to keep with the generated S3 policy, we still
        // need to have the chunk "keys" in the POST. As such, we'll append
        // them as additional multi-part parameters.
        uploader.settings.multipart_params.chunks = 0;
        uploader.settings.multipart_params.chunk = 0;
        // Update the Key and Filename so that Amazon S3 will store the
        // base resource at the correct location.
        uploader.settings.multipart_params.key = file.s3Key;
    }
}

//sign each chunk that's not the first
function hChunkUploaded(uploader, file, info) {
    console.log("Chunk uploaded.", info.offset, "of", info.total, "bytes.");
    // As the chunks are uploaded, we need to change the target location of
    // the next chunk on Amazon S3. As such, we'll pre-increment the chunk index.
    file.chunkKey = (file.s3Key + "." + ++file.chunkIndex);

    //sign the next chunk
    var signature = signChunk(file.s3Key, file.chunkIndex);
    uploader.settings.multipart_params.signature = signature.signature;
    uploader.settings.multipart_params.policy = signature.policy;
    delete(uploader.settings.multipart_params.chunks);
    delete(uploader.settings.multipart_params.chunk);
    // Update the Amazon S3 chunk keys. By changing them here, Plupload will
    // automatically pick up the changes and apply them to the next chunk that
    // it uploads.
    uploader.settings.multipart_params.key = file.chunkKey;
}

// I determine if the given file size (in bytes) is large enough to allow
// for chunking on Amazon S3 (which requires each chunk but the last to be a
// minimum of 5MB in size).
function isFileSizeChunkableOnS3(fileSize) {
    var KB = 1024;
    var MB = (KB * 1024);
    var minSize = (MB * 5);
    return (fileSize > minSize);
}

uploader.bind("BeforeUpload", hBeforeUpload);
uploader.bind("ChunkUploaded", hChunkUploaded);
uploader.init();
Viewing the upload form
In your view template you have to make sure that the paths plupload has to request are available. We also have to provide the Amazon access key ID and the URL to our bucket, which will be used as the URL to post the final upload request to. We make use of the default Symfony path() function to generate our sign and merge paths, but put _chunk_ and _filename_ in as our variables. These can then easily be replaced by our plupload JavaScript functions.
<a id="select-file"
data-action="https://{{ upload_bucket }}.s3.amazonaws.com/"
data-aws-access-id="{{ aws_access_id }}"
data-merge="{{ path('upload_merge',{'filename':"_filename_",'chunks':'_chunks_'}) }}"
data-sign-chunk="{{ path('upload_sign_chunk', {'filename':"_filename_", 'chunk': '_chunk_'}) }}"
data-sign-new="{{ path('upload_sign_new', {'filename':"_filename_", "chunked": "_chunked_"}) }}"
data-url-progress="{{ path('posts_upload_video_progress',{'id':'_id'})}}"
>Select file</a>
Make sure you pass the access key ID and upload bucket to your template, by doing something like this in your controller:
<?php
return $this->render('....', [
    'upload_bucket' => $this->getParameter('s3_uploader_bucket'),
    'aws_access_id' => $this->getParameter('s3_uploader_id'),
]);
That’s it. I hope you found this tutorial helpful. If you have any questions or comments, please let me know below.