
Are you using AI - and how?

FLGUY

“Technique only”
pilot
Contributor
I use it for OPRs (the AF version of FITREPs) because we have to fill blocks to a character limit, plus or minus 3 characters.

So I feed the AI my narrative and tell it to say the same thing, but in the max characters for the box, including spaces.

In the AF, if you don't fill the box exactly, you're wrong.
This is the most USAF thing I’ve read in a long time.
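A minimal sketch of that workflow, assuming Python; the ask_model() wrapper and the 230-character limit are hypothetical placeholders for whatever chat tool and box size you're actually working with:

    # Sketch only: check that an AI rewrite hits the box's character budget,
    # counting spaces, within the +/-3 window described above.
    def fits_block(text: str, limit: int, tolerance: int = 3) -> bool:
        """True if the text length (spaces included) is within +/-tolerance of limit."""
        return abs(len(text) - limit) <= tolerance

    def build_prompt(narrative: str, limit: int) -> str:
        """Ask the model to restate the narrative at exactly 'limit' characters, spaces included."""
        return (f"Say the same thing as the text below, but in exactly {limit} "
                f"characters including spaces:\n\n{narrative}")

    # Usage with a hypothetical ask_model() call to your AI of choice:
    # draft = ask_model(build_prompt(narrative, 230))
    # while not fits_block(draft, 230):
    #     draft = ask_model(build_prompt(narrative, 230))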
 

MIDNJAC

is clara ship
pilot
I think a lot of people start out saying "I don't want or need this..." - but smart people at places like Apple - technology elites - anticipate what you don't even know you want, then create it for your consumption. It's magic. And these things lead to a truly better and more meaningful life.

With all due respect Chuck, I don't need anyone at Apple/Meta/etc. to inform me or attempt to force me to be interested in something that I am not. Personally, I don't view them as being particularly smarter than anyone else, just very financially motivated to shove this down our throats with no regard for the long-term impacts. When I first experienced a text message in college, I thought it was the dumbest thing I had ever seen, and was astounded that people wanted to use it. So I will fully admit that I have almost zero interest in "tech" or anything in this realm. I don't think anything on my smartphone makes my life better in any way. Possibly more convenient at times, but not better. I guess I just have a much more pessimistic outlook on where this will go. I think there are some limited areas in which AI could be beneficial for humanity (health care/screening being one of the only examples), but for the most part, I see it as only bad. And by that, I mean that I find all of this to be very deeply concerning. I don't think we will look back at this as the dawn of a new, better era. Hope I'm wrong, but I'm willing to bet every dollar I have that in 10 years, I won't be.
 

Random8145

Registered User
Contributor
Also, it's important to remember that Apple has been wrong multiple times in gauging consumer demand for its products: some have failed to catch on, while others have been great successes. Apple's main product is the iPhone, which accounts for over 50% of its sales.
 

JTS11

Well-Known Member
pilot
Contributor
I'm thinking back to the early '90s in high school, when I got in trouble as a teenage boy for using Cliffs Notes to help write a paper on a book like Jane Eyre or Tess of the d'Urbervilles (not a fucking chance I was going to read that shit... I know they're classics, but not for a 16-year-old boy) 😁

I'm guessing the AI thing in academia is way worse than cribbing Cliffs Notes for an essay.
 

sevenhelmet

Low calorie attack from the Heartland
pilot
With all due respect Chuck, I don't need anyone at Apple/Meta/etc. to inform me or attempt to force me to be interested in something that I am not. Personally, I don't view them as being particularly smarter than anyone else, just very financially motivated to shove this down our throats with no regard for the long-term impacts. When I first experienced a text message in college, I thought it was the dumbest thing I had ever seen, and was astounded that people wanted to use it. So I will fully admit that I have almost zero interest in "tech" or anything in this realm. I don't think anything on my smartphone makes my life better in any way. Possibly more convenient at times, but not better. I guess I just have a much more pessimistic outlook on where this will go. I think there are some limited areas in which AI could be beneficial for humanity (health care/screening being one of the only examples), but for the most part, I see it as only bad. And by that, I mean that I find all of this to be very deeply concerning. I don't think we will look back at this as the dawn of a new, better era. Hope I'm wrong, but I'm willing to bet every dollar I have that in 10 years, I won't be.

I don’t think you are wrong. They’re in the data aggregating stage right now, and when they have enough to make us “need” their infrastructure, the fees, subscriptions, and microcharges will start. Everything else will slowly be enshittified, just like shopping malls and first-generation social media. But you can pay your subscriptions, slip on your VR goggles, and be a cool kid.

All the noble uses for AI sound great, but we live in a world driven by Wall Street.
 

robav8r

Well-Known Member
None
Contributor
I read this over the weekend and found it very interesting on a number of levels. It's a bit long, but doable in two hours or so. The author raises some very intriguing issues . . . .
 

Attachments

  • situationalawareness.pdf (2.5 MB)

Ventus

Weather Guesser
pilot
I like to draw and do some writing in my personal time. I use AI and ChatGPT as a personal sounding board/pose generator. Whenever I can't figure out how to organize a story or how to position a scene, I'll mess around with prompts or generate text and tweak the results to get closer to what I want. Then I take the building blocks and write something myself.
 

taxi1

Well-Known Member
pilot
I like to draw and do some writing in my personal time. I use AI and ChatGPT as a personal sounding board/pose generator. Whenever I can't figure out how to organize a story or how to position a scene, I'll mess around with prompts or generate text and tweak the results to get closer to what I want. Then I take the building blocks and write something myself.
When you do that, do you ask it to take on a particular voice or tone? As if Hemingway said it, for example, or in a casual slang voice? Mil speak?
 

taxi1

Well-Known Member
pilot
I read this over the weekend and found it very interesting on a number of levels. It's a bit long, but doable in two hours or so. The author raises some very intriguing issues . . . .
Nice summary of the contents...


Here are 10 takeaways that leaped out of Aschenbrenner's 50,000-word, five-chapter, 165-page paper, "Situational Awareness: The Decade Ahead":

1. "Trust the trendlines ... The trendlines are intense, and they were right."

  • "The magic of deep learning is that it just works — and the trendlines have been astonishingly consistent, despite naysayers at every turn."
2. "Over and over again, year after year, skeptics have claimed 'deep learning won't be able to do X' and have been quickly proven wrong."

  • "If there's one lesson we've learned from the past decade of AI, it's that you should never bet against deep learning."
  • "We're literally running out of benchmarks."
3. It's "strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer."

4. "By 2027, rather than a chatbot, you're going to have something that looks more like an agent, like a coworker."

5. The data wall: "There is a potentially important source of variance for all of this: we're running out of internet data. That could mean that, very soon, the naive approach to pretraining larger language models on more scraped data could start hitting serious bottlenecks."

6. "AI progress won't stop at human-level … We would rapidly go from human-level to vastly superhuman AI systems."

  • Superintelligence, coming in 2030 A.D.?
7. AI products are likely to become "the biggest revenue driver for America's largest corporations, and by far their biggest area of growth. Forecasts of overall revenue growth for these companies would skyrocket."

  • "Stock markets would follow; we might see our first $10T company soon thereafter. Big tech at this point would be willing to go all out, each investing many hundreds of billions (at least) into further AI scaleout. We probably [will] see our first many-hundred-billion-dollar corporate bond sale."
8. "Our failure today to erect sufficient barriers around research on artificial general intelligence "will be irreversible soon: in the next 12-24 months, we will leak key AGI breakthroughs to the [Chinese Communist Party]. It will be the national security establishment's single greatest regret before the decade is out."

9. Superintelligence "will be the United States' most important national defense project."

10. There's "no crack team coming to handle this. ... Right now, there's perhaps a few hundred people in the world who realize what's about to hit us, who understand just how crazy things are about to get, who have situational awareness."

Reality check: Aschenbrenner, with roots in the effective altruism movement, is an AI investor. So he's not a disinterested party.
 

ChuckMK23

FERS and TSP contributor!
pilot
Turns out NAVAIR is already using AI tools on the Flight Deck

https://www.navair.navy.mil/news/NA...ht-deck-safety-NOCTRNAL-research/Mon-05202024

Auto Recognition of aircraft markings. New tools for the Handler and Air Boss?

[Attached image: NOCTRNAL aircraft detection]
 