I've seen it repeated in many Internet fora over the past few years: lens corrections rob you of resolution, lens corrections rob you of pixels, lens corrections produce unsharp pixels. Many other variations on that seem to be repeated, as well.
Since my articles tend toward the tl;dr (too long, didn't read) end of the scale, let me give you the final comment up front:
Lens corrections...
- Can produce additional noise in the image corners.
- Can produce demosaic flaws if the corrections are encoded into the raw data. (That's the raw image data, not the EXIF data.)
- Often don't fully correct the underlying flaw.
That's about it for obvious image flaws you might see.
Why did camera makers resort to lens corrections? Any optical design is a balancing act of quite a few variables. Early SLR lenses often had obvious flaws—typically spherical aberration, chromatic aberration, and coma were the worst offenders—and for a long time the lens designers simply tried to engineer these flaws out using optics.
That's one of the things that led to aspherical lens elements, for instance.
But the rise of digital photography around the turn of the century, coupled with the rapidly advancing abilities of the electronics inside the cameras, led to a different approach: design for a few primary traits and then apply digital corrections to the flaws that can be mathematically defined. The easier the mathematics involved, the more likely the correction will appear "perfect."
Thus, vignetting correction was one of the first to be attempted. Light falloff in the image circle is often easily measurable. It's more difficult to compute with complex lenses and ones with aspherical elements, but it's still a relatively simple math application. Even more so because it doesn't have to be absolutely perfect to seem "right."
Unfortunately, this has led to lenses being designed with high degrees of vignetting. I've recently seen lenses with over a four-stop drop in the corners, and two stops isn't at all uncommon. The problem with that is this: you basically have a -2 to -4EV exposure reduction at the outer edge of the imaging circle. When you correct that, it's like bringing up the noise floor by two to four stops. Given the randomness of photons, an already dark corner tends to show visible noise gain once the vignetting correction is applied. So be careful with really dark areas in the corners. I have a solution for that if I encounter it, but it involves layer masks in Photoshop, so it's not for the faint of heart.
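To put a number on that noise penalty, here's a minimal sketch in Python/NumPy of a radial vignetting correction. The falloff model and its coefficients (k1, k2) are hypothetical stand-ins for a real per-lens profile; the point is simply that the corner gain multiplies noise right along with signal.

```python
import numpy as np

def vignetting_gain(h, w, k1=0.35, k2=0.25):
    """Per-pixel gain map that inverts a simple radial falloff model.

    Falloff is modeled as 1 - k1*r^2 - k2*r^4, where r is normalized
    distance from the image center (r = 1 in the corners). k1 and k2
    are hypothetical; real profiles come from per-lens measurements.
    """
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) / (cx ** 2 + cy ** 2)
    falloff = 1.0 - k1 * r2 - k2 * r2 ** 2
    return 1.0 / falloff   # correction = multiply by the inverse

gain = vignetting_gain(400, 600)
print(f"corner gain: {gain[0, 0]:.2f}x (= +{np.log2(gain[0, 0]):.2f}EV)")

# The catch: the gain multiplies signal and noise alike. A corner
# pushed up by +2EV has its shot and read noise amplified 4x, which
# is why dark corners get visibly noisier after correction.
```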
A second common optical issue that has (usually) simple math that can be applied is linear distortion. Most lens designs produce some level of barrel or pincushion distortion, and that can also be corrected with a reasonably simple application of math. If the distortion is regular, the correction is usually quite good and essentially invisible. With some types of distortion, things are more complicated. Many wide angle lenses with aspherical elements produce a wavy type of distortion, one sub-type of which is called mustache distortion (because it looks like a mustache with raised ends). This type of distortion tends not to be fully corrected: the "barrel" in the middle of the mustache is often corrected, but the wings of the mustache still have a bit of a wave to them (or vice versa).
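As a rough illustration of how such a correction works, here's a sketch (Python/NumPy again) of the standard inverse-mapping approach: for each output pixel, compute where it would have landed in the distorted frame and resample from there. The single k1 coefficient is invented; one term like this handles plain barrel or pincushion, and it's precisely the higher-order terms needed for mustache distortion that make full correction hard.

```python
import numpy as np

def undistort(img, k1=-0.08):
    """Correct simple radial (barrel/pincushion) distortion.

    Uses the common one-term model r_d = r_u * (1 + k1 * r_u^2):
    for each undistorted output pixel we compute the distorted
    location it came from and sample it (nearest-neighbor, for
    brevity). k1 = -0.08 is a hypothetical barrel coefficient; real
    corrections use per-lens, per-focal-length tables and smoother
    resampling.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    norm = np.hypot(cx, cy)                 # r = 1 at the corners
    y, x = np.mgrid[0:h, 0:w]
    xn, yn = (x - cx) / norm, (y - cy) / norm
    scale = 1 + k1 * (xn ** 2 + yn ** 2)    # radial remap factor
    xs = np.clip(np.round(cx + xn * scale * norm), 0, w - 1).astype(int)
    ys = np.clip(np.round(cy + yn * scale * norm), 0, h - 1).astype(int)
    return img[ys, xs]

frame = np.random.rand(300, 450, 3)         # stand-in for demosaiced data
corrected = undistort(frame)
```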
Lateral chromatic aberration can generally be detected and corrected relatively easily, too. You don't even necessarily need to know the math behind what's causing the chromatic aberration to do so, as you can simply do what Photoshop used to do, which is apply specific color range corrections to edges. Longitudinal chromatic aberration is a different story, as it doesn't just impact edges, and it is more difficult to correct.
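Since lateral chromatic aberration is essentially the red and blue channels being rendered at slightly different magnifications than green, much of it can be removed by radially rescaling those two channels. Here's a hedged sketch of that idea; the scale factors are invented for illustration, and real converters measure or estimate them per lens.

```python
import numpy as np

def rescale_channel(ch, scale):
    """Radially rescale one channel about the image center
    (nearest-neighbor sampling, for brevity)."""
    h, w = ch.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    y, x = np.mgrid[0:h, 0:w]
    xs = np.clip(np.round(cx + (x - cx) / scale), 0, w - 1).astype(int)
    ys = np.clip(np.round(cy + (y - cy) / scale), 0, h - 1).astype(int)
    return ch[ys, xs]

def correct_lateral_ca(rgb, r_scale=1.0008, b_scale=0.9994):
    """Align the red and blue channels to green. The scale factors
    here are hypothetical; note how tiny they are -- lateral CA is a
    sub-pixel to few-pixel effect that grows toward the frame edges."""
    out = rgb.copy()
    out[..., 0] = rescale_channel(rgb[..., 0], r_scale)
    out[..., 2] = rescale_channel(rgb[..., 2], b_scale)
    return out
```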
Prime lenses have easier corrections than zoom lenses, by the way, as there are fewer data points you have to know and account for. Moreover, the EXIF data might not be precise in telling you the exact focal length of a zoom lens.
Which gets us to some of the pronouncements I see on the Internet. For instance, that corrections reduce resolution or produce unsharp pixels. Generally, no. In terms of resolution and angle of view, the lens designers are designing lenses to be corrected. For instance, the Nikkor 14-30mm f/4 is actually optically slightly wider than 14mm uncorrected. The lens corrections bring it back to being about what a real 14mm angle of view would be. Also note that most cameras are saving “border” pixels outside the actual image size so that they have the correct information and aren’t resizing data. Nikon Z cameras, for instance, record eight extra pixels on each boundary to support these corrections.
Which then leads to the claim that "because the pixels are being shifted, they are blurrier." I'm generally not seeing that: if the lens was sharp in the corners prior to distortion correction, you tend to still get sharpness in the corners after correction. Of course, if it wasn't sharp to start with, then it won't be sharp when the revised pixels are produced, and that can, and sometimes does, create some additional blurriness. But I'd even judge that to be minimal.
Close, but not precise, is basically what's happening with lens corrections.
Indeed, that's something I probably haven't been specific about enough in my reviews. I'll see a -3EV corner being corrected to -0.7EV in some vignette corrections, for instance. That's nearing my "ignorable" level of corner darkening, but it isn't a perfect corner.
It's probably best to think of "lens corrections" as actually being "lens correction approximations," both because the correction isn't complete and because it's fairly simplistically defined. Most people aren't going to notice a 0.5° distortion or a -0.5EV vignette, so "getting to zero" isn't something that any camera maker I know of is trying to always accomplish. They just want to get the results down to negligible. Unfortunately, what that means often varies from lens to lens within a camera maker, so I've seen Lens A corrected to 0.2° linearity and Lens B corrected to 0.35°. You won't notice the difference, but a difference remains that can be measured.
Sony users in particular need to be careful about lens corrections. Most camera makers apply lens corrections only to JPEG data; when you produce raw files, they merely store the lens correction tables/data in the EXIF data. Sony, however, applies its lens corrections to the raw data itself. Moreover, in the case of vignetting these corrections are not subtle pixel-to-pixel adjustments, but a finite number of correction circles. Vignetting correction thus becomes a series of basically round rings, with a single fixed value used for each entire ring. This introduces a compounding math problem: Sony's demosaic is different from that of third-party raw converters. Those third-party converters may use different nearest-neighbor calculations, and the neighboring pixels at ring boundaries already have "corrections" in them that the converter doesn't account for. Thus, it's fairly common to see faint banding, rings, or other defects when a raw converter tries to render the already-corrected Sony data.
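To see why the ring approach can produce banding, here's a toy comparison (my own illustration, not Sony's actual algorithm) of a smooth radial gain versus the same gain quantized into a handful of fixed-value rings. The step in gain at each ring boundary is the discontinuity a third-party demosaic can stumble over.

```python
import numpy as np

h, w = 400, 600
cy, cx = (h - 1) / 2, (w - 1) / 2
y, x = np.mgrid[0:h, 0:w]
r = np.hypot(x - cx, y - cy) / np.hypot(cx, cy)   # 0 at center, 1 in corners

# A smooth per-pixel correction (hypothetical falloff model)...
smooth_gain = 1.0 / (1.0 - 0.5 * r ** 2)

# ...versus the same correction quantized into fixed-value rings
# (the ring count is invented; it just has to be finite and small).
n_rings = 8
r_ringed = np.floor(r * n_rings) / n_rings
ringed_gain = 1.0 / (1.0 - 0.5 * r_ringed ** 2)

# The jump in gain at each ring boundary is what can surface as faint
# circular banding once a converter that assumes untouched raw data
# demosaics across those boundaries.
print(f"largest gain step at a ring boundary: "
      f"{np.abs(np.diff(ringed_gain, axis=1)).max():.3f}")
```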