Google Pixel 4a vs $3,500 PRO Canon Camera // Can a PHONE in 2020 take better photos than a DSLR?

By Loïc Bellemare-Alford
Aug 14, 2021

Good morning everyone! In my last two videos I praised the photos I got with the Google Pixel 4a, and that got me wondering how the Pixel 4a compares to a good old DSLR. In this video I'm going to take a bunch of test shots to compare a $350 phone, with its computational photography, against a $3,000 camera. For those of you who are new to this channel, I'm Loïc Bellemare-Alford, a young photographer and filmmaker on a journey to get better at this art! I recently published my review of the Google Pixel 4a with a focus on its photography and video performance, and I also made a tutorial on how to capture stars using only your phone, so if you're interested in those videos, check the links in the description down below. Also, if you enjoy this video, don't forget to hit the like button, and subscribe if you want to learn more about the Pixel phone or about photography and video in general.

For these tests I'm going to be using my full-frame camera, the Canon 6D Mark II with a 16-35mm f/2.8 lens, and my Google Pixel 4a mounted on top so we get similar pictures from both cameras. The Pixel 4a has the same primary camera as the Pixel 3 and the Pixel 5, so if you have any of these phones you're going to get the same results, and the comparison works for them too.

For my first test, I want to see how the pictures look straight out of the camera. I'm going to take a JPEG on each camera and leave it unedited, so we have the same baseline to compare them; let's take the first shot here. This first test was a bit unfair to the DSLR, because it's a camera made to shoot RAW, so I'm going to go into Lightroom and see if I can recover some of the highlights and shadows. For people who don't know the difference between JPEG and RAW: think of a JPEG as a fully baked cake, where you can't change anything once the recipe is done, while a RAW picture is like still having all the ingredients before you've mixed them, so you can keep tweaking the picture and end up with something totally different if you want. The Google Pixel phones also have another trick up their sleeve: they use bracketing for every single shot, meaning they take multiple exposures and then merge them together afterwards to create one high-dynamic-range picture.
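If you're curious what that bracket-and-merge step actually looks like, here's a minimal Python sketch using OpenCV's exposure fusion. This is only an illustration of the general idea, not Google's actual HDR+ pipeline, and the file names are placeholders.

```python
import cv2

# Load a bracketed series (under-, normally, and over-exposed) of the same scene.
# The file names are placeholders for frames shot on a tripod.
paths = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
images = [cv2.imread(p) for p in paths]

# Align the frames in case the camera moved slightly between shots.
cv2.createAlignMTB().process(images, images)

# Mertens exposure fusion blends the best-exposed parts of each frame into one
# image, without needing the exposure times; similar in spirit to how the phone
# merges its bracketed frames into a single high-dynamic-range result.
fused = cv2.createMergeMertens().process(images)

# The fused image is a float in the 0-1 range; scale it back to 8-bit to save.
cv2.imwrite("hdr_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```

On the DSLR you'd shoot that bracketed series yourself, with auto exposure bracketing or manually, and merge it afterwards; the Pixel does the equivalent automatically on every shot.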

The practical upshot is more detail in the shadows and in the highlights. You can do the same thing manually on a DSLR, so I'm going to set that up and we can see how both perform when creating high-dynamic-range pictures. (The drone just crashed! It totally lost control, went flying away, and hit a tree. I see it right here; I hope it's going to be okay!) Now let's talk about my favorite mode in the Google Camera app: portrait mode. Portrait mode creates a fake depth of field using AI, and this is one place where I think my DSLR is going to have an advantage, because its bigger sensor naturally produces a much shallower depth of field, with no AI trickery needed to create those kinds of shots.
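To make the "fake depth of field" idea a bit more concrete, here's a rough sketch of the basic trick: separate the subject from the background with a mask, blur everything, and composite the sharp subject back on top. The real portrait mode builds its mask with machine-learned segmentation and depth estimation; the hand-made mask and file names below are just stand-ins.

```python
import cv2
import numpy as np

# Placeholder inputs: a portrait photo plus a subject mask where white marks
# the person and black marks the background. Portrait mode generates this
# mask itself; here we assume it was painted by hand.
photo = cv2.imread("portrait.jpg").astype(np.float32)
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Heavily blur the whole frame; this stands in for an out-of-focus background.
blurred = cv2.GaussianBlur(photo, (0, 0), sigmaX=15)

# Feather the mask edge so the transition from sharp to blurry looks soft.
mask = cv2.GaussianBlur(mask, (0, 0), sigmaX=5)[..., None]

# Composite: keep sharp pixels where the mask says "subject",
# and use the blurred pixels everywhere else.
fake_bokeh = mask * photo + (1.0 - mask) * blurred
cv2.imwrite("portrait_fake_bokeh.jpg", fake_bokeh.clip(0, 255).astype(np.uint8))
```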

Now let's talk about zoom on both cameras. On a DSLR you have to buy expensive lenses to zoom into your shot, whereas on the Pixel you use a digital zoom that's enhanced with AI to get better results; let's see how both perform. Video isn't amazing on either camera, but I definitely think the DSLR has the advantage here, because it can use its optics to create blurry backgrounds, which makes for much better, more professional-looking images. On phones, it's still too demanding to use AI to blur the background in video, so it's not really possible yet. For depth of field, I'm giving this one to the DSLR right away.

It's getting quite dark now, which makes it a great time to talk about the last mode I want to compare: Night Sight. Night Sight takes multiple pictures, stitches them together, and uses AI to make sure everything stays visible and sharp. I think this mode is actually going to do better on my phone than on the DSLR, because in this light the DSLR needs a longer exposure, and handheld movement during that exposure creates blur in the shot.
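As a very rough illustration of the stacking idea behind Night Sight: averaging several short, noisy exposures of the same scene reduces noise without needing one long, blur-prone exposure. The sketch below only averages frames that are assumed to be already aligned; the real Night Sight adds frame alignment, tone mapping, and more on top, and the file names are placeholders.

```python
import cv2
import numpy as np

# A handful of short exposures of the same low-light scene (placeholder names).
# For simplicity this assumes the frames are already aligned, e.g. shot on a
# tripod; handheld stacking would need an alignment step before averaging.
paths = [f"night_frame_{i}.jpg" for i in range(6)]
frames = [cv2.imread(p).astype(np.float32) for p in paths]

# Averaging N aligned frames keeps the scene but cuts the random sensor noise
# roughly by a factor of sqrt(N), which is why the stacked result looks much
# cleaner than any single short exposure.
stacked = np.mean(frames, axis=0)

# A simple contrast stretch stands in for the tone mapping the phone applies
# to make the dark scene readable.
stretched = cv2.normalize(stacked, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("night_stacked.jpg", stretched.astype(np.uint8))
```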

Now let's start comparing some of the results I got. If you're planning to buy a DSLR and never do any post-processing, don't do it! If you look at the picture we got straight out of the camera as a JPEG, there's a lot less detail in the highlights, here in the clouds, and in the shadows at the bottom. But if we take a RAW picture, bring it into Lightroom, and do a quick edit, we can actually recover a lot more of the highlights and pull a lot more detail out of the shadows. Comparing the HDR shots, I noticed a few interesting things. First of all, the results are really good on both cameras, with plenty of detail in both the highlights and the shadows. The two pictures look very similar, but one interesting thing is that if you take a single-exposure RAW picture on the DSLR, there's actually enough information in the RAW file to recover the highlights and the shadows on its own. It shows just how much information a RAW file keeps.

Another interesting case, where I think the DSLR didn't perform as well, is the single exposure where it completely blew out the highlights. Normally, with an HDR shot, you should be able to recover them, but even here some of the highlights in the clouds are completely blown and there's nothing we can do with them. The Pixel was really smart here: it underexposed the whole shot to make sure the details were preserved across the entire picture. Now, before looking at the portrait mode pictures, I want to go into a bit more depth about what depth of field actually is, because I think most people don't know why smartphones have to use AI to fake it. Light hitting a subject, let's say a person, gets diffused in every direction.

A lens then captures that light and redirects it onto the sensor. The point where the light converges again, where the two lines cross in my example, is called the focal point. It's the point where the image is perfectly sharp, or what we call in focus. Now, if the subject gets closer to the lens, the focal point moves and the plane of focus gets smaller. The plane of focus is the area around the focal point where the image is still reasonably in focus.

Everything outside the plane of focus gets blurred, and that blur around the subject is what we usually mean by a shallow depth of field. All of this to say: phone cameras have tiny lenses and sensors, which means the plane of focus is much deeper, so even when the subject gets close there isn't much blur in the background. This is why phones use all kinds of clever techniques to fake the depth of field.
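To put rough numbers on why the tiny lens matters, here's a quick back-of-the-envelope calculation using the standard depth-of-field approximation. The focal lengths, apertures, and circle-of-confusion values are assumed ballpark figures for a full-frame camera and a phone main camera, not measured specs.

```python
def depth_of_field_m(focal_mm, f_number, subject_m, coc_mm):
    """Approximate total depth of field in metres using the common thin-lens
    approximation DoF ~ 2 * N * c * u^2 / f^2, valid when the subject is
    well inside the hyperfocal distance."""
    f = focal_mm / 1000.0   # focal length in metres
    c = coc_mm / 1000.0     # acceptable circle of confusion in metres
    return 2 * f_number * c * subject_m ** 2 / f ** 2

# Assumed ballpark values: a full-frame camera at 35 mm f/2.8 versus a phone
# main camera with a ~4.4 mm lens at f/1.7, both with a subject about 1.5 m away.
full_frame = depth_of_field_m(focal_mm=35, f_number=2.8, subject_m=1.5, coc_mm=0.030)
phone = depth_of_field_m(focal_mm=4.4, f_number=1.7, subject_m=1.5, coc_mm=0.004)

print(f"Full frame: roughly {full_frame:.2f} m in focus")   # ~0.3 m
print(f"Phone:      roughly {phone:.2f} m in focus")        # ~1.6 m
```

With these assumed numbers, the full-frame camera keeps only about 30 cm around the subject acceptably sharp, while the phone keeps around a metre and a half, which is exactly why the phone's background barely blurs on its own.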

Now let's look at a few examples, starting with the first portrait. If we zoom in, my hair is perfectly detailed in both pictures; you can see every little strand. But this is a pretty easy background, so that's what we would expect. One thing to notice is that the focus in the DSLR picture is actually on my nose, so my eyes are already starting to get blurry, whereas in the Pixel picture everything is perfectly sharp, simply because of how depth of field works on the two cameras. The second picture is a harder shot: the DSLR renders every hair in perfect detail thanks to its optics, but the AI is really struggling to decide what is my hair and what is the background, because the background is messier. In the next shot, of my dog, the same thing happens: the background is more or less brown, and so is my dog's fur, so the AI again has a hard time separating the background from his hair. One interesting thing to note is that you can go into your smartphone and actually reduce the intensity of the blur.

That lets you keep some background blur from your smartphone while still getting a more natural-looking result. Looking at a few examples where I shot some leaves, the results are all really good. In the first shot, both cameras do really well. In the second shot, both are again really good, with nice detail and sharpness along the edges of the leaves that should be in focus. Now, here is where we start to see a difference.

The Pixel has been really aggressive with the background blur, probably a little too much, because in the DSLR shot you can see distinct layers: the further things sit in the background, the blurrier they become. That's definitely the more natural result. The next one is again really good on both cameras, and this time the Pixel did a bit better: as the scene recedes, it blurred out some of the leaves in the background, so that shot worked quite well. But there are also some weird cases where the AI struggles.

For example, this branch just disappears, because the AI doesn't really know which parts are in the foreground and which are not; it has a really hard time separating them. In an even harder example, the AI makes a complete mess of it: it just doesn't know what's foreground, what's background, or how to blur each part of the shot, whereas the DSLR gets it perfectly right. A pretty special example is this flower: if you look at the detail, the focus landed on the front part.

I missed the focus, because I wanted it in the center, and that's the problem with a DSLR and a shallow depth of field: if you don't nail the focus, the rest of the picture is out of focus. My phone, on the other hand, did a perfect job of getting the whole flower in focus and then adding the background blur afterwards. Now let's look at the zoomed pictures. At 1.3x the two cameras produce similar shots, but at 2.6x the difference of having a dedicated lens starts to show: there's a lot more detail in the DSLR shot than in the Pixel's. And when we move to 7x, or a 190mm lens, it's a totally different story: all the detail is preserved in the DSLR shot, while there isn't much detail left in the Pixel's zoom.
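For context on why the detail falls apart at higher zoom levels: basic digital zoom is just cropping the middle of the frame and enlarging it, so past a certain point you're interpolating pixels rather than capturing them. The Pixel's Super Res Zoom layers multi-frame processing on top, but the underlying operation looks something like this sketch (the file name is a placeholder).

```python
import cv2

def digital_zoom(image, factor):
    """Crop the center of the frame by `factor`, then upscale it back to the
    original size; the upscaling step is where fine detail is lost."""
    h, w = image.shape[:2]
    crop_w, crop_h = int(w / factor), int(h / factor)
    x0, y0 = (w - crop_w) // 2, (h - crop_h) // 2
    crop = image[y0:y0 + crop_h, x0:x0 + crop_w]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LANCZOS4)

photo = cv2.imread("scene.jpg")
for factor in (1.3, 2.6, 7.0):
    cv2.imwrite(f"zoom_{factor}x.jpg", digital_zoom(photo, factor))
```

At 7x the crop covers only about 2% of the original pixels, which lines up with how little detail survives in the Pixel shot compared with a real 190mm lens.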

Low light is one place where I think my phone outperforms my DSLR for handheld shots. In this picture there was almost no light left, yet we still have some detail in the highlights and the shadows, while on my DSLR I couldn't even find a shutter speed fast enough to avoid motion blur. Put both cameras on tripods, though, and they both get a lot better. Some of the astrophotography mode shots I got from the Pixel 4a are really, really good: there's a lot of detail in the stars, but also in the foreground, which is much darker.

We can even capture the Milky Way with the smartphone, which is really impressive. But if we compare two shots I took the same night with the DSLR and the Pixel, the DSLR really outperforms the Pixel here, simply because its bigger sensor can capture so much more detail in low light once it's steady on a tripod.

Here are my final thoughts. Are smartphone cameras going to replace DSLRs or mirrorless cameras anytime soon? No. But I think that for most people the smartphone camera is good enough, and with computational photography you can get stunning results without having to think too much when taking the shot. If you want to step up your photography a little, I would go into the camera app settings and enable RAW capture, so you can play with the picture afterwards and tweak it to your liking. DSLRs and mirrorless cameras are still going to produce better results, especially in more complex cases like long exposures, but they're also a lot harder to use: you need to think about all the settings when creating your shot, and then process every single shot in dedicated software like Lightroom afterwards, so it takes a lot longer to create an image. I also rarely shoot HDRs on my camera because they're a pain to create; it's such a long process. I have a full video on HDRs with your DSLR, so if you're interested, check out the link in the description down below.

I hope you enjoyed this video as much as I had fun creating it; I just wanted to make a fun video to show how far the cameras inside our smartphones have come. If you enjoyed it, please leave a like below, and subscribe if you want to learn more about the Pixel 4a or about photography and filmmaking in general. See you in the next one. Bye!


Source: Loïc Bellemare-Alford
