At Peecho, we use many of the Amazon AWS services. For example, we use EC2 for our virtual machines and S3 for all of our storage. Because of the scalable nature of S3, we could theoretically serve an infinite number of users uploading files to our platform without stressing our machines or infrastructure at all. The only drawback is that connected apps have to upload their files directly to S3, which can be challenging at times. That's why I'm writing this blog post.
First of all, the Spring RestTemplate class is awesome. It is a really neat and easy way to make requests to RESTful web services, or even not-so-RESTful services. The cool thing is that you can configure marshallers on the template, which will automatically convert outgoing and incoming objects into XML, JSON and more. For example, you can configure an XStreamMarshaller to marshal all outgoing objects into XML and all incoming XML into objects.
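As a sketch of that marshalling setup (the bean ids are made up; the converter and marshaller classes are standard Spring), you could register a MarshallingHttpMessageConverter backed by XStream on the template:

```xml
<bean id="xstreamMarshaller" class="org.springframework.oxm.xstream.XStreamMarshaller" />

<bean id="marshallingRestTemplate" class="org.springframework.web.client.RestTemplate">
    <property name="messageConverters">
        <list>
            <!-- Converts outgoing objects to XML and incoming XML to objects via XStream -->
            <bean class="org.springframework.http.converter.xml.MarshallingHttpMessageConverter">
                <constructor-arg ref="xstreamMarshaller" />
            </bean>
        </list>
    </property>
</bean>
```

With this in place, methods like getForObject and postForObject transparently (un)marshal your domain objects.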
Uploading to S3 can be really easy if you use one of the many libraries that Amazon provides for the different platforms like Java, .NET, PHP and so on. These libraries have easy-to-use methods for uploading files to buckets, creating objects and setting policies. To make use of all this, you need an Amazon public and secret key, which is fine if you are uploading to your own S3 account. However, we need our customers to be able to upload to our S3 account, and naturally we can't give our customers the secret key to our Amazon account, because they could do all kinds of nasty, evil stuff with it.
Luckily, Amazon provides us with a way to upload files to S3 using a pre-signed URL. This URL contains a Base64-encoded policy document, some paths to the data and a hash of the entire request, signed with your secret key. The policy document specifies exactly what you can upload and where. For example, it can specify that you may only upload *.jpg files to the /user-data/username/* path in S3. This policy is generated on our server, using our secret key. This way, customers of our API can only upload into directories that we specify, and tampering with other customers' files is impossible. Doing browser-based uploads using a pre-signed URL is explained in this S3 article.
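To make the server-side signing step concrete, here is a minimal JDK-only sketch. The policy JSON, bucket name and secret key are made-up placeholders, and the class and method names are mine; the Base64 encoding and HMAC-SHA1 signature follow Amazon's browser-based upload documentation:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class S3PolicySigner {

    // Base64-encode the raw JSON policy document, as S3 expects.
    static String encodePolicy(String policyJson) {
        return Base64.getEncoder()
                .encodeToString(policyJson.getBytes(StandardCharsets.UTF_8));
    }

    // Sign the encoded policy with HMAC-SHA1 using the S3 secret key.
    static String sign(String encodedPolicy, String secretKey) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        return Base64.getEncoder()
                .encodeToString(hmac.doFinal(encodedPolicy.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        // A made-up policy: uploads are restricted to the user-data/username/ prefix.
        String policy = "{\"expiration\": \"2030-01-01T00:00:00Z\", \"conditions\": ["
                + "{\"bucket\": \"example-bucket\"},"
                + "[\"starts-with\", \"$key\", \"user-data/username/\"],"
                + "[\"starts-with\", \"$Content-Type\", \"image/\"]"
                + "]}";
        String encoded = encodePolicy(policy);
        String signature = sign(encoded, "exampleSecretKey"); // never a real secret
        System.out.println("policy=" + encoded);
        System.out.println("signature=" + signature);
    }
}
```

The encoded policy and the signature are what end up in the pre-signed request, so the client never sees the secret key itself.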
Now we have a signed URL to post to and a valid policy document, but we still need to actually upload the data. This is where the RestTemplate comes in. S3 expects a multipart form POST instead of a normal file upload. Luckily, there is a MessageConverter in Spring to create multipart form posts! Configure it in your application context like this:
<bean id="restTemplate" class="org.springframework.web.client.RestTemplate">
    <property name="messageConverters">
        <list>
            <bean class="org.springframework.http.converter.StringHttpMessageConverter" />
            <bean class="org.springframework.http.converter.FormHttpMessageConverter" />
        </list>
    </property>
</bean>
The FormHttpMessageConverter makes it possible to create a multipart form post. In your Java code, you can now create the request:
MultiValueMap<String, Object> form = new LinkedMultiValueMap<String, Object>();
form.add("key", objectKey);
form.add("Content-Type", contentType); // e.g. "image/jpeg"
form.add("file", new FileSystemResource(file));
When you provide a map containing only strings, the converter will turn it into a normal form post. However, as soon as you add a file to the map, the converter automatically makes it a multipart form post. The filename parameter of the form is set to an empty string, which means Amazon S3 will use the filename of the uploaded file as the filename of the object.
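For the curious, this is roughly the shape of what FormHttpMessageConverter puts on the wire for that map. This is a hand-rolled, JDK-only sketch of the multipart body, not the converter's actual implementation; the boundary string and values are made up. Note the empty filename attribute on the file part, which is the behaviour described above:

```java
public class MultipartSketch {

    // Build a multipart/form-data body by hand. The file part goes last,
    // because S3 ignores any form field that follows the file.
    static String multipartBody(String boundary, String objectKey, String fileContents) {
        String crlf = "\r\n";
        StringBuilder body = new StringBuilder();
        // A plain string field becomes a simple form-data part.
        body.append("--").append(boundary).append(crlf)
            .append("Content-Disposition: form-data; name=\"key\"").append(crlf)
            .append(crlf)
            .append(objectKey).append(crlf);
        // The file part carries an empty filename, so S3 falls back
        // to the uploaded file's own name for the object.
        body.append("--").append(boundary).append(crlf)
            .append("Content-Disposition: form-data; name=\"file\"; filename=\"\"").append(crlf)
            .append("Content-Type: image/jpeg").append(crlf)
            .append(crlf)
            .append(fileContents).append(crlf);
        // Closing boundary marks the end of the multipart body.
        body.append("--").append(boundary).append("--").append(crlf);
        return body.toString();
    }

    public static void main(String[] args) {
        System.out.println(multipartBody("----sketchBoundary",
                "user-data/username/photo.jpg", "<binary data>"));
    }
}
```

The real request also carries a Content-Type header of multipart/form-data with the boundary, which the converter sets for you.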
Well, that is pretty much it - it can't get much easier, right? :-)