I'm Rene Ritchie, welcome back to Vector, so great to have you here. iPhone XR has the same front-facing TrueDepth camera system as the iPhone X, XS, and XS Max. That means it can do the same portrait selfies, including with Portrait Lighting, the new Depth Control that lets you change the bokeh from f/1.4 to f/16, and the same Animoji, Memoji, and augmented reality effects as all of those higher-priced phones. And by the way, Apple says it's heard our complaints. No, not the wacky internet hashtag-smoothgate conspiracy theories, but the legit complaints I talked about in my previous video. Apple, for its part, really, truly, deeply believes the new imaging pipeline is better than the previous one and better than what anyone else is doing today. If you disagree, and when it comes to the selfie results I personally disagree, hard or soft, smooth or whatever, it's important to let Apple know, a lot, because pipelines can and will be tweaked, updated, and improved over time. And, like I said, if they can detect and preserve fabric, rope, cloud, and other textures,
why not skin texture as well? The good news is that Apple says it's identified a bug, not just with faces but with everything, and it will be fixing it in iOS 12.1 by making sure the system picks the sharpest frame possible to compute from. That way, we'll all be getting better, crisper selfies, and photos in general, going forward. What's different is the camera on the back. Where iPhone X, XS, and XS Max have dual systems, with wide-angle and telephoto lenses fused together for 2x optical zoom and rear-facing portrait mode, iPhone XR has just the wide-angle. It's the same improved wide-angle as on the XS and XS Max, complete with an over 30% bigger sensor, with bigger, deeper pixels to drink in more light and preserve it more accurately. And it has the same new Smart HDR feature that ties the image signal processor to the eight-core Neural Engine: it buffers up to four frames ahead, shoots a series of exposures, interleaves them with a series of underexposures to get detail from the highlights, and tops it all off with a long exposure to pull similar detail from the shadows.
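(A quick aside for the developers watching: Apple doesn't expose Smart HDR itself as an API, but you can approximate the "series of exposures" ingredient with AVFoundation's bracketed capture. A minimal sketch, assuming `photoOutput` is an AVCapturePhotoOutput on a running session and `delegate` is your AVCapturePhotoCaptureDelegate; the bias values are illustrative, and the frame fusion Smart HDR does on the ISP and Neural Engine is left out entirely.)

```swift
import AVFoundation

// Minimal sketch of manual exposure bracketing. Smart HDR does this
// automatically and fuses the frames; here we only request the raw
// ingredients: under, normal, and over exposure, in EV stops.
func captureBracket(photoOutput: AVCapturePhotoOutput,
                    delegate: AVCapturePhotoCaptureDelegate) {
    let biases: [Float] = [-2.0, 0.0, +2.0]   // illustrative values
    let bracketed = biases.map {
        AVCaptureAutoExposureBracketedStillImageSettings
            .autoExposureSettings(exposureTargetBias: $0)
    }
    let settings = AVCapturePhotoBracketSettings(
        rawPixelFormatType: 0,   // 0 = processed (non-RAW) capture
        processedFormat: [AVVideoCodecKey: AVVideoCodecType.hevc],
        bracketedSettings: bracketed)
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```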
You can turn all of that off in Settings, or keep both the Smart HDR and the original version if you like, but it adds up to a very similar experience for a camera phone that costs only three-quarters as much. Where iPhone XR really differs from the XS and XS Max is the rear portrait mode. Absent a second, telephoto camera to leverage for more real depth data, Apple is doing with the XR something similar to what Google did with the Pixel 2 last year: using the parallax pulled off the phase-detect autofocus system, or what Apple calls focus pixels, to get some depth data, and then applying a machine-learning-powered segmentation mask to separate the subject from the background.
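(Again for the developers: Apple's portrait segmentation network is private, but the general idea, a per-pixel mask that separates a person from the background, is something you can try yourself with the Vision framework on iOS 15 and later. A rough sketch, not Apple's actual pipeline; the quality level and pixel format are just reasonable defaults.)

```swift
import Vision
import CoreVideo

// Rough sketch of a person segmentation mask, the same general idea as
// the XR's subject/background separation (not Apple's portrait pipeline).
func personMask(for image: CGImage) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate                     // favor mask quality over speed
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    // One grayscale buffer: white ≈ person, black ≈ background.
    return request.results?.first?.pixelBuffer
}
```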
Now, as Apple was announcing this on stage, I was worried. I know a lot of people love, love Google's portrait mode, but as someone who's owned a Pixel 2 XL for a year now, I've had some issues with it. Some were minor, like the slightly cooler color cast; it looks like Google fixed that with the Pixel 3. Others were bigger, like the cardboard-cutout effect segmentation masks can produce, and frankly, I see a little of that on the XR as well. One was a deal-breaker, though, and it kept me from using the Pixel for anything other than regular photography, not portrait mode: the inability to show the effect live in the preview. I explained why in my full review, but I'll quickly repeat it again: the Pixel's portrait mode isn't really portrait mode to me, because it doesn't show the actual effect in the live preview.
It only applies it afterwards, a few long seconds afterwards, as a post-production filter, and that's something you could do with any app. Heck, it's something Google could release as an app for every other camera phone out there. Many would argue that none of that matters, just the end result. I would say both matter, at least for me, because I'm used to shooting with a DSLR. I'm used to framing for the actual shot I'm getting: if I don't like the depth of field in the preview, I can move a little and get it to look just the way I want it before taking the shot.
With the Pixel, I had to take the shot, go check it, wait for it to apply, and then, if I didn't like it, go back and shoot it all over again. All of that was a nasty surprise when I got the phone because, everything is terrible, almost no one covered it in their reviews, and it doesn't sound like the Pixel 3 solves this. What I think it comes down to is that Google seems to be using separate pipelines for the Pixel's live preview and its actual camera capture, where Apple has gone to great engineering-in-silicon pains to make sure that, like a DSLR, what you see is what you shoot. Even more so, because the iPhone display is so much better and more accurate than any DSLR's. So, while I'll be getting one, I'll likely be sticking to non-portrait-mode photos with it as well. Yell at me in the comments all you want, Google nerds, I heart you anyway; I just skew far more towards optical accuracy. Long tangent short: I was worried Apple would end up doing the same on iPhone XR, but turns out, not so much. Whether it's the power of the A12 Bionic or just the result of different design trade-offs, Apple has managed to push the depth effect into the live view on iPhone XR as well, so what you see is what you shoot.
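(If you want a feel for why the live effect is the harder architecture: a post-capture filter runs once, but a live effect has to run on every preview frame. Here's a toy single-pipeline sketch with AVFoundation and Core Image, where a plain Gaussian blur stands in for a real depth effect; wiring up the capture session and rendering the output to screen are left out.)

```swift
import AVFoundation
import CoreImage

// Toy sketch: filter every preview frame, so the effect the user frames
// with is the effect they capture. A Gaussian blur stands in for a real
// depth effect; session setup and on-screen rendering are omitted.
final class LiveFilteredPreview: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let context = CIContext()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let filtered = CIImage(cvPixelBuffer: pixelBuffer)
            .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 6.0])
        // Render `filtered` (via `context`) to a Metal-backed view here;
        // this has to keep up with the preview frame rate, every frame.
        _ = filtered
    }
}
```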
Apple showed off the results on stage and in its demo pics. I know you can't always trust demo pics; they tend to be cherry-picked, idealized, best-case, best-of-the-best-case examples. But Apple has a good reputation here. They don't cheat and claim DSLR photos were shot with their phone, or bring special lighting rigs around with them everywhere they go, things that an average customer just wouldn't have access to.
And they also don't hire professional photographers to go on big publicity tours, or make huge publicity buys with massive magazine media companies, including a bunch of cover shots. A lot of famous photographers do use iPhones, and plenty of magazines have shot covers and features on iPhones, but as far as I can tell, Apple hasn't ever paid for covers or for placement. What I also like is the way Apple backs up the demo shots, and quickly, with Shot on iPhone shots, which is something Apple latched onto early: people started hashtagging their photos on Flickr and Instagram, and Apple noticed, became enamored with them, quickly got behind them, and started amplifying them, which was smart.
The best campaigns are often the ones your customers come up with, but in this case it's extra smart, because you don't have to trust Apple's demos; you're literally flooded with other examples almost immediately after launch. Second long tangent short: I thought I had an idea of what the XR could do with its new portrait mode, but no. Shooting with it for the last week has been one surprise after another, some good, some not so good, but all of it educational. The major difference is this: with all previous portraits mode (and yeah, it's like "surgeons general," don't worry about it), from iPhone 7 Plus to iPhone XS and XS Max, you were shooting with the effective 52 mm telephoto lens. With iPhone XR, you're shooting with the effective 26 mm wide-angle lens. Switching from one to the other is like swapping glass on a traditional camera.
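(The framing difference is just geometry. A quick sketch of the field-of-view math, using full-frame-equivalent focal lengths against a 36 mm full-frame sensor width as the reference; the numbers are illustrative, not Apple's spec sheet.)

```swift
import Foundation

// Horizontal field of view for a full-frame-equivalent focal length,
// measured against the 36 mm width of a full-frame sensor.
func horizontalFOV(equivalentFocalLengthMM: Double, sensorWidthMM: Double = 36.0) -> Double {
    2 * atan(sensorWidthMM / (2 * equivalentFocalLengthMM)) * 180 / .pi
}

// The XS telephoto (52 mm equivalent) sees roughly 38 degrees across;
// the XR wide-angle (26 mm equivalent) sees roughly 69 degrees.
let teleFOV = horizontalFOV(equivalentFocalLengthMM: 52)
let wideFOV = horizontalFOV(equivalentFocalLengthMM: 26)
```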
That's especially true because, instead of just slapping a custom Gaussian or disc blur over the background and calling it a day, which is what Apple used to do and, I think, pretty much everyone else still does, this year Apple examined a bunch of high-end cameras and lenses and engineered separate virtual lens models for both the iPhone XS and the iPhone XR. That means, when you switch to portrait mode, it ingests the scene with computer vision, tries to make sense of everything it quote-unquote "sees," and then renders the bokeh, including lights, overlapping lights, and the kind of distortions real glass physics produces in the real world. And when you slide the new Depth Control back and forth between f/1.4 and f/16, it recalculates and re-renders that virtual lens model. The result is the same kind of character and, yeah, personality you get with real-world lenses, and that means shooting with iPhone XS vs. iPhone XR gives your photos a different character and, yeah, personality as well.
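(Apple's measured lens models aren't public, but the physics they start from is textbook thin-lens math. Here's a toy sketch of how the blur circle falls out of the f-number, which is roughly the quantity the Depth Control slider is re-rendering; the 26 mm figure is a full-frame-equivalent focal length used purely for illustration, and the real lens profiles also model highlight shape and edge distortion.)

```swift
import Foundation

// Toy thin-lens model: diameter (in mm, at the image plane) of the blur
// circle for a point at `subject` metres when focused at `focus` metres.
// `f` is focal length in mm, `n` the f-number. A stand-in for Apple's
// measured lens profiles, not their actual model.
func blurCircleDiameter(f: Double, n: Double, focus: Double, subject: Double) -> Double {
    let fMetres = f / 1000
    let apertureDiameter = fMetres / n               // wider aperture, more blur
    let magnification = fMetres / (focus - fMetres)  // image magnification at focus
    let relativeDefocus = abs(subject - focus) / subject
    return apertureDiameter * magnification * relativeDefocus * 1000
}

// Sliding Depth Control from f/16 down to f/1.4 scales the blur by 16/1.4,
// which is what the re-render makes visible:
let soft  = blurCircleDiameter(f: 26, n: 1.4, focus: 1.5, subject: 4)  // ~0.2 mm
let crisp = blurCircleDiameter(f: 26, n: 16,  focus: 1.5, subject: 4)  // ~0.02 mm
```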
There are also some huge pros and cons to get used to. For starters, the wide-angle lens is, of course, much wider, so if you want a face to fill the frame, you'll have to sneaker-zoom in instead of out. That you can move in and out so much is terrific, though: you're not bound by the same sweet spot that you are with the dual-camera portrait mode system, which often seems to be telling you to move closer, move further away, move closer, move further away, just take the shot already. And that means you can get a lot closer, or a lot further away, with the XR than you can with the XS and still trigger the depth effect.
And because the XR is using the f/1.8-aperture wide-angle for portrait mode, and not the f/2.4 telephoto like the dual-camera iPhones, it can pull in more light and compute its version of the depth effect in much darker conditions than the XS or previous iPhones can. But only for human faces, which is where the XR might experience its own deal-breaker, at least for some people. Unlike iPhone 7 Plus, which, when it first shipped, had a portrait mode optimized for human faces but would do its best to sort out everything else, and which now, with iOS 12, does an amazing job on an amazing array of different subjects and objects, iPhone XR literally will not engage portrait mode if it can't detect a human face. Now, like I said in my review, it's pretty great at engaging when it does see a face. It uses Face ID-style neural networks to not only identify human faces, but identify them even if they're partially obscured by glasses, scarves, and other forms of clothing. Apple trained and tested it on an incredibly diverse and varied pool of people, and of things that people usually have on their faces and on their heads.
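(One more for the developers: Apple's Face ID-style networks are private, but gating an effect on whether a face is present is easy to sketch with the Vision framework. `applyDepthEffect` and `captureStandardPhoto` below are hypothetical placeholders for your own capture paths, not real APIs.)

```swift
import Vision
import CoreVideo

// Sketch of face-gated portrait capture, loosely mirroring the XR's
// "no face, no portrait mode" behavior. The two closures are hypothetical
// placeholders for your own capture paths.
func capturePortraitIfFacePresent(in pixelBuffer: CVPixelBuffer,
                                  applyDepthEffect: @escaping () -> Void,
                                  captureStandardPhoto: @escaping () -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        if faces.isEmpty {
            captureStandardPhoto()   // no face: fall back to a regular photo
        } else {
            applyDepthEffect()       // face found: engage the depth effect
        }
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```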
But that does mean no blurry backgrounds behind your foods or drinks, no pets or droids in depth effect either, and it can even lose track of human faces if they turn too much past profile. Like I also said in my review, the f/1.8 camera has gotten good enough that you get a lot of real, optical depth of field just by picking your shots. But if you want the computational stuff, and you want it for everything, you'll have to move up to an iPhone XS or XS Max, or stick with an iPhone 8 Plus or 7 Plus, to get it. Now, I've shot with SLRs and DSLRs all my life, but I've never considered myself a real photographer, more of a hobbyist at best. So, to get a better sense of the trade-offs, I asked a real photographer, a real professional photographer, what he thought. Here's Tyler Stallman.

Thanks for having me be a part of your video, Rene.
My name is Tyler Stallman, and I've been working as a photographer for over a decade now. I think this is the first time that Apple actually undersold the cameras in their new phones. With a whole new, larger sensor, lens, and camera components, everything about it has been performing a lot better in the iPhone XS that I've been using so far, but the best news is that we can expect most of those same improvements to come to the iPhone XR. So first, let's talk about what is the same, and it's the most important stuff to me: everything on the wide-angle lens. It'll allow you to have the same increased dynamic range, where it's using Smart HDR to take multiple exposures every single moment that the camera is on and combine them into image quality that we just weren't seeing in cell phone cameras before. Now, previous iPhones were doing HDR as well, but they had to take the photo and then process it, and you'd see the results afterwards. The iPhone XS and XR are live processing all the time. That means that your Live Photos also have that effect, as well as all your video, even on the selfie camera.
It's got that extra dynamic range, and it makes a huge difference. But the new camera isn't just what you see inside the little bump on the back of your phone. It's also things like the Neural Engine, which has been improved in both the XR and the XS (and furthermore is exactly the same in both), as well as the image signal processor, and those together are doing a lot of the intelligent processing that gives you those incredible results straight out of camera. Now, the big difference is that we don't have the telephoto lens on the iPhone XR.
That means that you can't zoom in with quite as much detail, and when you're taking portrait mode photos, there's a little less refinement around the edges of your subject. So those times that you see ears or noses or glasses get cut off in portrait mode? That might be a little worse on the XR, because it doesn't have quite as much depth information to work with. However, you do get something for this different lens: since it's only using one lens, the wider lens, you're able to take a wider portrait mode photo than you can on the XS. Strange that, for so much lower a price, the XR actually has one feature the XS doesn't. Anyway, thanks again for having me, Rene. I can't wait to talk to you more about the releases of 2018.

Thanks, Tyler.
If you haven't already, check out and subscribe to his channel. Not only can you see his iPhone XR hands-on, but also his ongoing experiences shooting with the XS, as well as the fancy photography workstation he set up and put together with Jonathan Morrison. I'll link all of that below. I'm going to say this again, because it bears repeating, and again: as good as DSLRs and Micro Four Thirds have become, and I shoot the sit-down scenes and some B-roll for this very show on Panasonic and Canon cameras, we now live in an age of computational photography, of bits that can go far beyond the atoms. Theoretically, those bits, those computational cameras, have no limits. They can reproduce the world in a way no physics-bound glass ever could. What you shoot could end up looking more real than real, scientific and sterile, or just uncanny and unnatural. But by imposing some constraints, and yes, distortions, of real-world physics and lenses onto the computational models,
not only does the wrong we've gotten used to look right again, the limits add character and drive creativity. And physical or computational, that's what you want from a great camera. Now, if you're interested in this brave but strange new world and aren't sure where to start, if you keep hearing terms like algorithms and neural networks and are interested to learn more, check out Brilliant. They have a bunch of courses teaching you the logic and theory behind all of this. Each course is interactive and breaks up complicated concepts into bite-sized chunks to make sure you actually absorb the information. It's a strategy that I wish, I really wish, my teachers had used back when I was in college, because it might have caused me to stay in college longer. But check out brilliant.org/vector and start learning today. Thanks, Brilliant, and thanks to all of you for supporting the show. I've been shooting with Apple's dual-camera system for a few years now, and the new single-camera system for just around a week, so obviously I want to shoot a lot more to get a better handle on it.
But I think one thing is already abundantly clear: if you don't absolutely have to have the dual-camera system, you can get an amazing single-camera system, and frankly, industry-leading video, video that compares better to standalone video cameras than to other camera phones, and save yourself $250 or more, depending on size and configuration, while doing it. But enough of what I think; now I want to hear what you think. Are you dual cameras for life, or at least until three or four become an option? Or do you find the single camera, with its lower price point, more than appealing, maybe compelling? Hit like, hit subscribe, and then hit up the comments and let me know. And thank you so much for watching.
Source: Vector