An excellent reminder that Latin literature has had a huge influence on later literature and thought, including that of the medieval period. Well done! Jenyth Evans
An insightful publication by Dr. Amos Fox. Here are my key takeaways:
■ 𝐎𝐛𝐬𝐭𝐫𝐮𝐜𝐭𝐢𝐯𝐞 𝐖𝐚𝐫𝐟𝐚𝐫𝐞 𝐂𝐨𝐧𝐜𝐞𝐩𝐭: A strategy that disrupts adversaries' data and operational tempo, hindering their ability to execute long-range, stand-off attacks effectively.
■ 𝐀𝐈’𝐬 𝐃𝐮𝐚𝐥 𝐈𝐦𝐩𝐚𝐜𝐭: AI can improve decision-making speed and situational awareness but may also increase vulnerability by flooding decision-makers with excessive data, potentially leading to deception risks and operational overload.
■ 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞𝐬 𝐢𝐧 𝐋𝐚𝐧𝐝 𝐖𝐚𝐫𝐟𝐚𝐫𝐞: The inherent complexities of land combat, such as territorial control and protection of populations, cannot be fully resolved by AI, which struggles with applied scenarios outside of controlled environments.
■ 𝐑𝐞𝐜𝐨𝐦𝐦𝐞𝐧𝐝𝐚𝐭𝐢𝐨𝐧𝐬 𝐟𝐨𝐫 𝐃𝐨𝐜𝐭𝐫𝐢𝐧𝐞 𝐚𝐧𝐝 𝐏𝐨𝐥𝐢𝐜𝐲: Emphasizes the need for military doctrines to integrate AI thoughtfully across data, tempo, and kinetic operations, with clear guidelines on its ethical use and interoperability among allied forces.
■ 𝐈𝐧𝐭𝐞𝐫𝐨𝐩𝐞𝐫𝐚𝐛𝐢𝐥𝐢𝐭𝐲 𝐚𝐧𝐝 𝐄𝐭𝐡𝐢𝐜𝐚𝐥 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞𝐬: The lack of standardized policies across allied nations on the use and governance of AI in warfare poses significant ethical and operational challenges for coalition-based strategies.
Professor of Practice at Arizona State University's Future Security Initiative and School of Politics and Global Studies | Managing Editor at Small Wars Journal | Contributing Editor at War on the Rocks
This piece is a must-read. I have been trying to articulate much of what Dr. Amos Fox so thoroughly and eloquently puts forth here.
If you are involved at all, no matter how peripherally, in the prospect of emerging technologies and AI/ML being integrated into warfare, you owe it to yourself and the organization you work for not only to read this piece but to discuss its implications with your leadership.
Based on the circles and conversations I've been involved in, far too many people are putting an inordinate amount of "faith" or stock in AI/ML giving us a significant advantage in the close-in, tactical battlespace. That is not to say there will not be some advantages. But even with these potential advantages, it remains to be seen whether they will open up negative side effects of their own. I therefore believe many are engaging in a bit of wishful thinking, and Dr. Fox does a great job of spelling that out. Bolstering this argument is the newly released White House AI Framework, which states among other things that we must:
- Train and assess the AI system’s operators, who must have, at a minimum, appropriate training on the specific AI use case, product, or service, including its limitations, risks, and expected modes of failure, as well as a general knowledge of how the AI system functions in its deployment context.
- Ensure appropriate human consideration and/or oversight of AI-based decisions or actions, including by establishing clear human accountability for such decisions and actions and maintaining appropriate processes for escalation and senior-leadership approval
Keep those two points of the Framework in mind as you read Dr. Fox's piece and ponder the true transformational effect AI can have in the battlespace.
Professor of Practice at Arizona State University's Future Security Initiative and School of Politics and Global Studies | Managing Editor at Small Wars Journal | Contributing Editor at War on the Rocks
Associate Director of Stanford Program on Research Rigor and Reproducibility
The EASE Peer Review Committee's new toolkit entry has been published: 𝐇𝐨𝐰 𝐭𝐨 𝐀𝐬𝐬𝐞𝐬𝐬 𝐏𝐞𝐞𝐫 𝐑𝐞𝐯𝐢𝐞𝐰 𝐐𝐮𝐚𝐥𝐢𝐭𝐲. See our recommendations for editors below.
https://lnkd.in/gGEpGZtD
A cool trick for improving retrieval quality for RAG is to include retrieval-evals in-the-loop.
Given a set of retrieved results, use an LLM evaluator to decide how relevant each context is to the query, before synthesizing an answer.
You can use this to filter results, augment context from “backup” sources, and more.
This core idea was proposed in the recent CRAG paper (Corrective Retrieval Augmented Generation) by Yan et al., and now it’s available as a LlamaPack thanks to Ravi Theja Desetty! Check it out 👇
https://lnkd.in/gwj6_zDW
Paper: https://lnkd.in/gtbwiE2C
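To make the pattern concrete, here is a rough sketch of retrieval-evals in-the-loop, written in the spirit of CRAG rather than as the LlamaPack implementation. It assumes the OpenAI Python client; the model name, the backup_search callable, and the relevance threshold are placeholders you would swap for your own stack.

# Minimal sketch: grade each retrieved chunk for relevance with an LLM,
# keep the relevant ones, and top up from a backup source if too few survive.
# grade_relevance and corrective_retrieve are illustrative names, not library APIs.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def grade_relevance(query: str, chunk: str) -> bool:
    """Ask the LLM whether a retrieved chunk actually helps answer the query."""
    prompt = (
        "You are grading retrieved context for a RAG system.\n"
        f"Query: {query}\n"
        f"Context: {chunk}\n"
        "Answer with a single word: 'yes' if the context helps answer the query, otherwise 'no'."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

def corrective_retrieve(query: str, retrieved_chunks: list[str],
                        backup_search, min_relevant: int = 2) -> list[str]:
    """Filter retrieved chunks by LLM relevance; fall back to a backup source if needed."""
    relevant = [c for c in retrieved_chunks if grade_relevance(query, c)]
    if len(relevant) < min_relevant:
        # backup_search is a stand-in for a secondary index or web-search tool
        relevant += backup_search(query)
    return relevant

The surviving chunks then go to your normal answer-synthesis step, exactly as in a standard RAG pipeline; only the filtering and fallback are new.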
Interesting article about further improving context retrieval for RAG applications. It introduces additional steps that distill and disambiguate retrieved data through reasoning and additional search.
Worth a read!
HeyIris.AI
I am working on the final review: which version is better, and how you can create your own rules and uncover hidden secrets by comparing output results.
First blog link here: https://lnkd.in/gPtkMkkr
The chart below is from this excellent paper (https://buff.ly/4exxRtJ) on corrective RAG. One of the primary points the authors reinforce, and one that has yet to become a widely applied heuristic in industry implementations of RAG systems, is that no matter how big your context window is, you are probably always better off building a generation system on top of retrieval.