To photograph is to capture an image of reality. A faithful image of what we see. Or of what we think we see. Or of what we think we saw.
Because to photograph is to take a snapshot of reality using technology, so it must be objective!
But no process, physical or chemical, sees what the eye sees, let alone what the brain remembers. In fact, the image captured by the sensor of a digital camera(1) is never what the eye sees. Some differences shrink as technology evolves; others are more structural:
The dynamic range of a camera (its ability to retain detail in both dark and bright areas) is narrower than that of the eye: the eye's is estimated at around 20 EV(2), while the best current cameras do not exceed 15 EV.
The resolution, or sharpness, of the eye is estimated at 575 million pixels, more than ten times that of current sensors (around 50 million pixels for the best of them).
Sensitivity to light is lower in a camera, because the eye uses cones for color and rods for brightness alone (no color), and adapts between them depending on the light: in low light, it switches to the rods and keeps seeing, with a progressively desaturated (black & white) image.
The color range of a camera does not exactly match what the eye perceives: a sensor is sensitive to infrared, which the eye does not see. On the other hand, the eye is more sensitive to green, which a camera emulates by using twice as many green photosites as red or blue ones.
In the eye, angle of view and sharpness are linked: it sees more sharply in the center than at the edges (peripheral vision), which gives it both a wide but blurry angle of view (roughly the equivalent of a 22 mm focal length) and a sharp zone limited to the center (roughly the equivalent of 43 mm).
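To make the dynamic range figures concrete: each EV step represents a doubling of light, so a range of N EV corresponds to a contrast ratio of 2^N between the darkest and brightest details still distinguishable. A minimal sketch:

```python
# Each EV (Exposure Value) step doubles the light: a dynamic range of
# N EV spans a contrast ratio of 2**N between darkest and brightest
# details still distinguishable.
def contrast_ratio(ev_range: int) -> int:
    return 2 ** ev_range

eye = contrast_ratio(20)     # 1,048,576:1
camera = contrast_ratio(15)  # 32,768:1
print(f"eye    ~ {eye:,}:1")
print(f"camera ~ {camera:,}:1")
print(f"the eye spans {eye // camera}x more contrast")  # 32x
```

Five EV of difference may sound small, but it means the eye handles scenes with thirty-two times more contrast than the best sensors.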
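The "twice as many green photosites" mentioned above is the Bayer mosaic used by most sensors. A tiny sketch, assuming the common RGGB variant of the pattern:

```python
# Sketch of an RGGB Bayer mosaic: each 2x2 tile holds one red, one blue
# and two green photosites, mimicking the eye's higher sensitivity to green.
def bayer_pattern(rows: int, cols: int) -> list[list[str]]:
    tile = [["R", "G"], ["G", "B"]]
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

mosaic = bayer_pattern(4, 4)
flat = [p for row in mosaic for p in row]
print(flat.count("G"), flat.count("R"), flat.count("B"))  # 8 4 4
```

Half the photosites are green, a quarter red, a quarter blue; the missing colors at each pixel are then interpolated (demosaiced) by the camera's software.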
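The 22 mm and 43 mm focal equivalents above (the article's own figures) can be translated into diagonal angles of view with the standard formula 2·atan(d / 2f), taking the ~43.3 mm diagonal of a full-frame (24×36 mm) sensor:

```python
import math

# Diagonal of a full-frame (24 x 36 mm) sensor, about 43.27 mm.
DIAGONAL_MM = math.hypot(24, 36)

def angle_of_view(focal_mm: float) -> float:
    """Diagonal angle of view in degrees for a given focal length."""
    return math.degrees(2 * math.atan(DIAGONAL_MM / (2 * focal_mm)))

print(f"22 mm -> {angle_of_view(22):.0f} degrees (wide, blurry periphery)")
print(f"43 mm -> {angle_of_view(43):.0f} degrees (sharp central zone)")
```

This gives roughly 89° for the wide, blurry field and 53° for the sharp central zone, which is why a ~43 mm lens is often called "normal": it matches the eye's zone of sharp vision.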
Even what the brain remembers differs from what the eye saw: the images in our memories are more saturated than reality was.
This is why any image taken by a camera is always modified by the device's own software (SLR, smartphone, etc.), which applies predefined corrections automatically or semi-automatically (selecting landscape, portrait, or macro modes, ...).
The other option is for photographers to replace the camera's image processing themselves: editing the raw file from the camera and applying their own adjustments with specialized software (brand-specific or universal).
But in both cases, the image seen by the sensor must be edited to bring it closer(3) to what the eye (the brain) sees.
So, which is the most accurate photo?
the one generated automatically,
the one influenced by the selection of modes or
the one edited manually?
A faithful photo? In a pig's eye...
(1) Or the chemistry of silver film
(2) Exposure Value
(3) Editing the image generated by the device (typically, the familiar JPEG) instead of the raw file adds a correction after the device's own processing and, above all, after the image has been compressed (in the case of JPEG).