Like or Loathe Midjourney, Photographers Currently Have an Edge With It

Everyone knows that AI is changing every industry, and photography is far from exempt. What many may not know, however, is that photographers have an edge when it comes to image-generating AI. Let me show you how.

When I wrote my last article on AI, I was concerned that people were experiencing burnout from AI news and debates, but the views said otherwise. So, here I am again. If you're jaded with the AI topic, save yourself some time and click away, but if you're not, let me flag something I haven't seen discussed anywhere else.

I've been using AI in various capacities for several years, but the past 18 months have given birth to an entirely new breed of AI, more powerful than anything we'd seen before by light years. For the past nine months or so, I've been experimenting with Midjourney, one of the premier image-generating AI tools, similar to OpenAI's DALL-E (which I used before Midjourney) and Stable Diffusion (which I've barely used, but which is held in high regard). For the uninitiated, let's do a quick summary.

What Is Midjourney?

Midjourney is a generative AI model that creates images from text. By describing the image you want to see, Midjourney can generate results drawing on the enormous dataset of images it was trained on. The text used to generate these images is called a "prompt," and prompts can be as simple or as complicated as you choose. While early versions of Midjourney were impressive, the latest model version, 5.2, allows the creation of images indistinguishable from photographs.

The Edge Photographers and Videographers Have

The first thing to note is that anyone can get a photo-realistic image out of Midjourney, even with the most basic prompts. You might be surprised just how strong the results can be from prompts that are only a few words long. What many people misconstrue about this is that anybody can create anything, but that's not necessarily true. What makes prompt-driven AI difficult is controlling the output. Yes, anybody could create a photo-realistic image of an elephant, but to have full control over the setting, the colors, the depth of field, the angle, the light, and so on requires some know-how. Although it doesn't pertain to Midjourney (but rather to LLMs such as ChatGPT and Bard), there's a reason why "Prompt Engineer" is the most in-demand new job, with over 7,000 roles listed from June 2022 to June 2023, according to Workyard.

Now, having used many different AI tools, I feel confident in saying that the skill ceiling for Midjourney is significantly lower than the likes of ChatGPT. Still, most people don't use text-to-image AI particularly well, just typing basic prompts and hoping to get lucky. You can improve this with various parameters, but where photographers and videographers have the advantage is in using our expertise within the prompt itself.

Cameras and Lenses

Firstly, it has been shown that including cameras in your prompts can affect quality. It's not known how many images Midjourney has been trained on, but the general consensus is that it's comfortably in the billions. When you include a camera in your prompt, it will likely draw on images taken with that camera (among many other images). In fact, some people found that simply adding "H6D" to the end of a prompt could yield higher-quality results. I suspect many doing this don't even know that it refers to the $33,000 Hasselblad H6D medium format DSLR.

In my experience, which modern camera you choose doesn't affect the final result all that much in terms of quality, though the sensor size of the camera does typically affect depth of field. For example, the below images used identical prompts with the results varied only by camera: one was the Hasselblad H6D and one was the Fujifilm X100V. That is, one is a medium format sensor and one is an APS-C sensor.

What's important here isn't that the lighting changed, or some elements, or even the model; those variations are par for the course when you regenerate. What's interesting is the depth of field. The background of the X100V image is far closer to focus than the medium format one. That's accurate, and as photographers, we understand why this happened. So, using a combination of aperture and camera, we can dictate the depth of field.
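That sensor-size effect can even be quantified. For the same framing and f-number, depth of field scales roughly with the crop factor relative to full frame, which is why the medium format H6D renders a softer background than the APS-C X100V. Here is a quick sketch using approximate published sensor dimensions; note that Midjourney only inherits these tendencies from its training images, it doesn't compute them:

```python
from math import sqrt

# Full-frame (36 x 24 mm) diagonal as the reference, ~43.3 mm.
FF_DIAGONAL = sqrt(36**2 + 24**2)

def crop_factor(width_mm, height_mm):
    """Crop factor of a sensor relative to full frame."""
    return FF_DIAGONAL / sqrt(width_mm**2 + height_mm**2)

def equivalent_aperture(f_number, width_mm, height_mm):
    """The f-number that gives a similar depth of field on full frame."""
    return f_number * crop_factor(width_mm, height_mm)

# Approximate sensor dimensions (mm): Hasselblad H6D-100c medium format
# and Fujifilm X100V APS-C.
h6d = (53.4, 40.0)
x100v = (23.5, 15.6)

print(f"H6D crop factor:   {crop_factor(*h6d):.2f}")    # ~0.65
print(f"X100V crop factor: {crop_factor(*x100v):.2f}")  # ~1.53
print(f"f/2.8 on the H6D renders like f/{equivalent_aperture(2.8, *h6d):.1f} on full frame")
print(f"f/2.8 on the X100V renders like f/{equivalent_aperture(2.8, *x100v):.1f} on full frame")
```

In other words, at the same f-number the medium format sensor behaves like a much faster lens on full frame, and the APS-C sensor like a much slower one, which matches what the two generations show.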


As I mentioned above, the aperture can be used to affect the depth of field of an image, just as it does in real life. If you want a narrow depth of field in your image, you want Midjourney trawling fast apertures. Although it's far from an exact science (primarily because Midjourney has no way of gauging the distance of the subject from the camera in the reference images), the results will at least be in the right direction. Below are two prompts for a headshot on the street: in one I included f/1.4 in the prompt, and in the other I included f/11 instead.

You can see from the people on the left of the frame how much the aperture affects the image, and you can find more extreme examples than this. Remember, though: your words affect the depth of field too.


So, words, rather expectedly, play a huge role and often overpower the settings you use in your prompt if they're at odds with one another (for example, a "cinematic headshot" at "f/18" is not going to give you a headshot with everything in focus). If you type "snapshot of a man on the street," your depth of field will likely be wildly different from "editorial headshot of a man on the street." Below are the results for exactly these two prompts.

What's more, you don't have to use photography terms logically for them to work well. One example would be "macro photography" added to any prompt that has nothing to do with macro photography. Those two words will often cause your results to have a narrow depth of field and a generally cinematic look. The below examples show just how much the term "macro photography" can improve the results.


As every photographer and videographer knows, light is the be-all and end-all of our crafts. So, put that to work in Midjourney too. The average person doesn't know lighting styles, but you can control the lighting in Midjourney by naming them. As with every tip, remember Midjourney isn't a simulator, and you'll sometimes miss your target, but with some experimentation, you can control the output and look of generated images.

It wildly overcooked the eyes, but you can see how impactful it can be when you dictate the lighting.
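The levers covered so far (camera body, aperture, descriptive wording, lighting style) are easiest to apply consistently if you assemble prompts systematically rather than retyping them. Here is a minimal sketch; `build_prompt` is a hypothetical helper of my own, though the `--ar` aspect-ratio flag is a real Midjourney parameter:

```python
# Hypothetical helper: compose a Midjourney prompt from photographic
# parameters so that camera, aperture, and lighting are applied the
# same way every time.

def build_prompt(subject, camera=None, aperture=None, lighting=None,
                 style=None, aspect_ratio=None):
    parts = [subject]
    if camera:
        parts.append(f"shot on {camera}")
    if aperture:
        parts.append(f"f/{aperture}")
    if lighting:
        parts.append(f"{lighting} lighting")
    if style:
        parts.append(f"in the style of {style}")
    prompt = ", ".join(parts)
    if aspect_ratio:
        prompt += f" --ar {aspect_ratio}"  # Midjourney's aspect-ratio flag
    return prompt

print(build_prompt("editorial headshot of a man on the street",
                   camera="Hasselblad H6D", aperture=1.4,
                   lighting="Rembrandt", aspect_ratio="4:5"))
```

This makes it trivial to regenerate with one variable changed (swap the camera, step the aperture) while holding everything else constant, which is exactly how the comparisons above were made.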

Miscellaneous Tips

Remember, there's a lot of what will look like randomness in the results, but really, we just don't know all of the interactions or Midjourney's source material. Here are some photography-centric tips:

Midjourney can replicate film stocks quite well, so use them for a certain aesthetic
"Tilt-shift" sometimes works, but it often chooses a high viewpoint
"Color grading" tends to shoot for complementary and bold colors
"HDR" does exactly what HDR does most of the time
"Cinematic" often results in darker, low-key images
"8K": lots of people add this to the end of prompts, but in my experience it causes the results to look fake and CGI
Obscure photography types such as "pinhole" or "infrared" often work well
Unusual lenses can work too if they're well-known enough, such as Lensbaby
"Low-key" and "high-key" do exactly what you'd hope
You can dictate the angle of the shot with "low-angle" or even "drone"
Not including something, such as lighting, doesn't mean "no lighting"; it means "Midjourney, pick the lighting based on what I've said in this prompt"
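One way to keep track of tendencies like these is to record them in a lookup and only append modifiers you've tested. The mapping below simply restates the observations above; `apply_modifiers` is a hypothetical helper of my own, not anything Midjourney provides:

```python
# Hypothetical notebook of tested prompt modifiers and the tendency
# each one introduces in the output.
MODIFIER_NOTES = {
    "tilt-shift": "miniature look, often from a high viewpoint",
    "color grading": "bold, complementary palettes",
    "HDR": "expanded, HDR-style tonality",
    "cinematic": "darker, low-key images",
    "macro photography": "narrow depth of field, cinematic look",
    "low-key": "dark, moody exposure",
    "high-key": "bright, airy exposure",
    "low-angle": "camera placed below the subject",
    "drone": "high aerial viewpoint",
}

def apply_modifiers(base_prompt, *modifiers):
    """Append only modifiers that have been tested and noted."""
    unknown = [m for m in modifiers if m not in MODIFIER_NOTES]
    if unknown:
        raise ValueError(f"untested modifiers: {unknown}")
    return ", ".join([base_prompt, *modifiers])

print(apply_modifiers("portrait of a woman", "cinematic", "low-key"))
```

The point isn't the code itself; it's the habit of logging what each term actually does to your results instead of sprinkling buzzwords (like "8K") and hoping.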

The Ethical Elephant

I put some thought into whether I would include this section, but at this point, it's a widely known capability of Midjourney and other AI image generators, so I'll address it. However, I've decided not to include any example images.

The "in the style of" component of prompts is arguably the most powerful influence on the look of the final image. This can be used ethically and to great effect, as I've shown above, with the likes of "in the style of a National Geographic photograph" or "in the style of a Vogue cover," but by getting more specific, you tread on ethically difficult ground. For example, you could add "in the style of Annie Leibovitz" to your prompt, and it will get you closer to her aesthetic. If you combine this with other details (which I'm not going to give), you can get to an image that I'm confident could fool people into thinking it is hers. These sorts of prompts make me uncomfortable, whether you're referencing a photographer, an artist, or a DoP. This is also just one thread of an entire rope of copyright issues surrounding AI image generation.

Final Thoughts

AI is a mixed bag for photographers; it's powerful, useful, and revolutionary, but it's also scary, damaging, and legally uncharted. I resolved to practice using AI of all kinds as part of my skill set, and while that's helping me in many ways, it's also making me aware of where photographers are vulnerable. That's something that's spoken about regularly, so I thought I'd balance the scales a little with some of the advantages us 'togs have with text-to-image generators such as Midjourney.