🚀 Our paper presentations at ICIS – The International Conference on Information Systems in Bangkok

📅 Tuesday, 17 December 2024

➡️ Beyond Trial and Error: Strategic Assessment of Decentralized Identity in US Healthcare
👥 Sophia Goeppinger, Alexander Meier, Edona Elshan, Omid Malekan and Jan Marco Leimeister
🕒 When? 8:30–10:00, Track: Information Systems in Healthcare
✨ Best Paper Nominee

➡️ GenAI and Software Engineering: Strategies for Shaping the Core of Tomorrow’s Software Engineering Practice
👥 Olivia Bruhin, Philipp Ebel, Leon Müller and Mahei M. Li
🕒 When? 8:30–10:00, Session: Emerging Technologies in Practice
✨ Presented at the CIO Forum as one of 3 selected papers

➡️ Beyond Code: The Impact of Generative AI on Work Systems in Software Engineering
👥 Olivia Bruhin
🕒 When? 15:00–16:30, Session: On Professionals and Professions

➡️ Chatbot Agents Displaying Non-factive Reasoning Enhance Expectation Confirmation
👥 Andreas Göldi & Roman Rietsche
🕒 When? 15:00–16:30, Session: HTI Human-AI Interaction and Communication

➡️ Leveraging Prompting Guides as Worked Examples for Advanced Prompt Engineering Strategies
👥 Antonia Tolzin, Nils Knoth and Andreas Janson
🕒 When? 16:45–18:15, Session: Conversational Agents as Partners in Learning
✨ Best Paper Nominee and Best Paper in Track

📅 Wednesday, 18 December 2024

➡️ Cognitive Load Theory Approach to Hybrid Intelligence: Tackling the Dual Aim of Task Performance and Learning
👥 Eva Bittner, Sarah Oeste-Reiß, Rosemarie Kirmse, Mathis Poser and Dr. Christina Wiethof
🕒 When? 9:00–10:30, Session: Methods and Techniques for Impacting Learning Outcomes

➡️ Exploring Individuals’ Psychological Factors as Predictors of Workforce Agility in Software Development Teams
👥 Kristin Geffers, Ulrich Bretschneider and Dr. Karen Eilers
🕒 When? 9:00–10:30, Session: Intra- and Interpersonal Aspects of IT Workforce

➡️ Exploring the Paradoxical Relationship between Citizen and Professional Developers in Low Code Environments
👥 Olivia Bruhin, Philipp Ebel and Edona Elshan
🕒 When? 9:00–10:30, Session: Challenges in Software Development
✨ Nominated for the ICIS 2024 Best Student Paper (In Honor of TP Liang)

➡️ Making Sense of Large Language Model-Based AI Agents
👥 Andreas Göldi & Roman Rietsche
🕒 When? 9:00–10:30, Session: AI Design and Technology

We look forward to exciting discussions on site! 🇹🇭
Wirtschaftsinformatik Universität Kassel | Information Systems University of Kassel’s Post
More Relevant Posts
-
📢 Looking forward to presenting my current paper at ICIS – The International Conference on Information Systems on the 18th of December! And this in one of my favorite destinations, #Thailand, and one of the most thrilling cities, #Bangkok. Excited for the exchange on topics around #AgileTransformation, #AgileSoftwareDevelopment, #AgileLeadership, and #AgileMindset. See you there! 🚀🌍

***

Title: Exploring Individuals’ Psychological Factors as Predictors of Workforce Agility in Software Development Teams

What’s New?
- Our study uniquely examines how psychological factors such as attitudes, feelings, and perceptions influence workforce agility in software development teams.
- We provide initial evidence that an individual's agile mindset significantly impacts workforce agility.
- We identify psychological empowerment as a predictor of workforce agility, validating prior insights through a qualitative approach.
- Additionally, we investigate how perceived error culture, self-organization, and leaders' trust enhance the relationship between agile mindset and workforce agility.

So, what?
- Our study expands the understanding of psychological predictors influencing workforce agility, addressing an existing gap.
- We present a new theoretical model explaining the interconnections between agile mindset, psychological empowerment, working conditions, and workforce agility.
- We offer actionable insights for managers and HR to enhance workforce agility in software development teams by focusing on attitudes, feelings, and perceptions.

Special thanks to Ulrich Bretschneider and Dr. Karen Eilers for the great cooperation.
-
If you're working with LLMs or care about building production-grade applications using them, this course is a must. It features contributions from industry leaders such as Hamel H., Jeremy Howard, Sophia Yang, Ph.D., Simon Willison, and JJ Allaire, who bring expertise from companies like Fast.ai, Anaconda, and others. Covering diverse topics like Retrieval-Augmented Generation (RAG), fine-tuning, evaluation methods, and prompt engineering, the course offers practical insights and best practices, making it both comprehensive and immediately applicable.
-
Diana Montalion, author of Learning Systems Thinking: Essential Nonlinear Skills & Practices for Software Professionals, challenges us to rethink how we approach software development in an age of exponential change. In this talk, Diana examines the staggering effects of digital information systems on relational complexity and highlights the limitations of mechanistic, industrial thinking in software design. She invites us to embrace systems thinking—not just through tools like Kubernetes, but by fundamentally transforming the way we think and approach problems. Drawing on insights from Robert Pirsig, Diana discusses the mindshifts required to thrive in this era of nonlinear change and complexity. Check out the video here: https://lnkd.in/eyeNmbjv 💬 About the Speaker: Diana Montalion is a seasoned software professional with over 20 years of experience engineering and architecting systems for organizations such as Stanford, The Gates Foundation, Memorial Sloan Kettering, and Teach For All. She has served as Principal Systems Architect for The Economist and The Wikimedia Foundation. Through her company, Mentrix, Diana creates learning materials for nonlinear thinkers and designs modern software systems for diverse clients. She lives in New York’s Hudson Valley with her three dogs, one cat, and nine chickens. 📹 About the Clip: This video debuted during Emerging Tech East 2024 (formerly Philly ETE), a conference that brought together world-class speakers to share insights on leading-edge and emerging technologies and has become one of the premier gatherings of developers. 🖥️ About Chariot Solutions: For over 20 years, companies have looked to us as a partner to solve their toughest challenges and move their business forward. We solve complex problems with technology, working with a winning team of engineers who value relationships built on trust and common goals.
-
I keep thinking about how gen AI is changing software development practices, in unexpected ways. First, how do developers actually think and feel when coding, and how does the introduction of AI change that? There are research papers that measured the former. For example: Thomas, Patricia. (2006). Cognitive Absorption: Its antecedents and effect on user intentions to use technology https://lnkd.in/ezC3-iEa The author looked at cognitive states that explain "technology acceptance" or "user acceptance" in IT. In the area of software development, this translates to "developer adoption". So what cognitive states are there? I took the definitions from the paper and tried to think about what each looks like when coding. They look very familiar!

Temporal dissociation: developers perceive ample time to write, with better focus and productivity, because they lose track of time. Also known as being "in the zone".
Focused immersion: developers write faster and more accurately because full engagement reduces cognitive burden.
Heightened enjoyment: developers improve code quality over time because they enjoy coding and are more likely to revisit code.
Control: developers make quicker and more confident decisions because they feel in charge of the coding process.
Curiosity: developers create good solutions because curiosity drives exploration and deep understanding of the code.

Ok, so what's the point here? Well, if there is gen AI between the developer and the code, then these states pretty much go out of the window. If I use gen AI for coding, the first state to go is the "zone" (temporal dissociation). AI helpers are worse than pair programming here. Focused immersion goes out of the window, too: gen AI creates INCREASED cognitive load and causes lots of interruptions. Unless, of course, you are just copy-pasting. The first-time enjoyment of generative AI was a big thing, but it has pretty much dissipated. Control is an interesting one. I'm definitely a layer more insulated from the code with all the prompting.
There is way less control when using gen AI, just by the nature of how these systems work. Curiosity is a big thing. It requires a lot of effort: exploring code is more daunting than writing it. I think that's where gen AI is somewhat improving things. I definitely look at more code, more languages, more programming styles, etc. Trying to summarize it for myself. With gen AI, I get:
- way less of the "in the zone" state
- worse immersion: I noticed how much effort it takes to start looking at generated rather than written code
- enjoyment that stays the same, because I manage what I do with gen AI and how, or limit its use for coding
- control: omg, this is the biggest surprise! I lost control of the project base
- curiosity: kinda increased with the broadened horizons, but I'm not sure about quality
I'll be looking at research papers in this area; I want to see the trends. #softwaredevelopment #coding #genai #cognitive
Cognitive Absorption: Its antecedents and effect on user intentions to use technology
core.ac.uk
-
Happy to announce my first blog post, published by ClearRoute on our Engineering blog. It comes out of our Community of Practice sessions, where we share and discuss topics in technology and skills we have learnt, and now we can share them with you all! Please have a read and let me know what you think: https://lnkd.in/e-mACSdU
Top 5 AI Tools for Software Engineering 2024
engineering.clearroute.io
-
Very, very interesting find: most devs using LLM tools for 6+ months see productivity increases, but most engineering leaders whose teams use these tools do not see team-level increases. Why? For context, this finding comes from this week's The Pragmatic Engineer deepdive, where Elin Nilsson and I analyzed 216 responses from software engineers and engineering leaders on their usage of LLM tools for software engineering. My theories on this discrepancy:

1. Productivity translates to better work-life balance. People could well be more productive, but the gains are consumed by other tasks. If GenAI shaves a few minutes off a longer task, there’s more time for a walk, a coffee, or going home earlier.

2. GenAI mostly aids simpler work. GenAI helps with coding tasks in large organizations, but coding is often the smallest part of the work! Alignment, meetings, and talking with others often take up more time. GenAI won’t change this.

3. Maybe it's just hard to measure? As an engineering leader, consider that GenAI can have a positive impact without the impact being measured! We have ample evidence via other studies (e.g. this peer-reviewed one: https://lnkd.in/eCgXfyRw) that GenAI tools are genuinely helpful for software engineers. And while most engineering leaders have the perception that these tools don’t make a big difference at the team level, I’d urge all engineering leaders to look closer before declaring that there really is no difference!

The full article the image is from, with more findings: https://lnkd.in/eBkrxDJA
-
Definitely an interesting read on the impact of GenAI on perceived productivity and other benefits.
-
From the AI4Software Spanish Research Network, we are pleased to present the paper "On the interaction between the search parameters and the nature of the search problems in search-based model-driven engineering", published in the journal Software: Practice and Experience. This achievement is the result of the collaboration between two nodes of the network: Universidad San Jorge and Universitat Politècnica de València (UPV). In this paper, the authors (Isis Roca Mainer, Jaime Font Burdeus, Lorena Arcega Rodríguez, and Carlos Cetina) evaluate the impact of different search parameter values on the performance of an evolutionary algorithm whose population is in the form of software models. The evaluation includes 1895 model fragment location problems (characterized by five accepted measures) from two industrial case studies and uses 625 different combinations of search parameter values. The search-based model-driven engineering community could benefit from the outcome of this work by accounting for the influence of search parameter values and the nature of model fragment location problems in their studies. Further details of this work can be found at: https://lnkd.in/dzjYRAyN. In addition, this work will be presented as part of #CEDI, in the "Arquitecturas Software y Variabilidad" (ASV) track of the XXVIII "Jornadas de Ingeniería del Software y Bases de Datos" (JISBD), on Monday 17 June, from 10:30 to 12:00, Room A.3.1.
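The kind of parameter sweep the study describes can be sketched in a few lines. This is purely illustrative and not the paper's code: `run_ga` and the toy OneMax fitness (counting 1-bits) are assumptions standing in for the real model-fragment fitness, and the grid here is tiny compared to the study's 625 combinations of parameter values.

```python
import itertools
import random

def run_ga(pop_size, mutation_rate, genome_len=32, generations=40, seed=0):
    """One GA run under a fixed combination of search parameter values."""
    rng = random.Random(seed)
    fitness = lambda genome: sum(genome)  # toy OneMax objective
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection of two parents
        parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(2)]
        # One-point crossover
        cut = rng.randrange(1, genome_len)
        child = parents[0][:cut] + parents[1][cut:]
        # Per-gene mutation, controlled by the swept mutation_rate
        child = [1 - b if rng.random() < mutation_rate else b for b in child]
        # Steady-state replacement of the current worst individual
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        pop[worst] = child
    return max(map(fitness, pop))

# The sweep itself: one run per combination of parameter values.
grid = itertools.product([10, 50], [0.01, 0.1])   # (pop_size, mutation_rate)
results = {(p, m): run_ga(p, m) for p, m in grid}
for combo, best in sorted(results.items()):
    print(combo, best)
```

A real study would replace the toy fitness with the model-fragment location measures and repeat each combination over many problems and seeds before comparing.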
On the interaction between the search parameters and the nature of the search problems in search‐based model‐driven engineering
onlinelibrary.wiley.com
-
🚀 Embracing Innovation: Exploring the Uni-OVSeg Framework in Semantic Segmentation 🤖 Hey fellow Software Architects! 👋 Today, I wanted to share my recent journey with the Uni-OVSeg framework and how it transformed my approach to semantic segmentation. Let's dive in! ### Unpacking Uni-OVSeg: A Game-Changer in Semantic Segmentation In a world where annotation costs can be prohibitive, Uni-OVSeg offers a refreshing solution. By utilizing unpaired mask-text pairs, this framework slashes the need for labor-intensive image-mask-text triplet annotations. This shift not only streamlines the annotation process but also enhances efficiency in open-vocabulary segmentation. ### Navigating Challenges: Lessons Learned and Insights Gained Throughout my experience with Uni-OVSeg, I encountered both triumphs and hurdles. From optimizing model training to fine-tuning segmentation accuracy, each decision led to valuable insights. Embracing these challenges head-on not only honed my technical skills but also enriched my problem-solving capabilities. ### Growth Mindset: Embracing Change for Future Projects As I reflect on this journey, I am inspired to approach future projects with a renewed mindset. The lessons learned from Uni-OVSeg have shaped my perspective on innovation and resilience. Moving forward, I am excited to implement these insights to drive progress and excellence in my work. 🔑 Key Takeaways: - Embrace innovation to tackle complex challenges efficiently. - Continuous learning and adaptation
-
#30DaysOfFLCode Challenge by OpenMined – Day 2 Update Today, I read the article Federated Learning: Challenges, Methods, and Future Directions by Li et al. (https://lnkd.in/dTHS7rHd). My key takeaway from the paper is the four main challenges identified by the authors for federated learning (FL): 1. Expensive communication: Communication in federated settings is a bottleneck due to the need to keep data local and the potentially vast number of devices involved. Efficient communication methods must aim to reduce either the number of communication rounds or the size of transmitted updates. 2. Systems heterogeneity: Devices in federated settings vary widely in hardware, power, and internet connectivity, which may often be unreliable. FL methods must be designed to handle low participation rates, adapt to hardware diversity, and remain robust to device dropouts. 3. Statistical heterogeneity: Data across devices is often non-identically distributed, along with significant variations in the amount of data possessed by each device. This challenges standard modeling approaches and necessitates techniques that support personalized or device-specific models. 4. Privacy concerns: While FL protects raw data by sharing model updates, these updates can still reveal sensitive information. Techniques like differential privacy, homomorphic encryption, and secure multiparty computation address this but often trade off performance or efficiency, requiring a careful balance. All FL scenarios face at least a subset of these challenges. For instance, a next-word prediction model, a common application of FL, suffers from virtually all four challenges. Training a prediction model using data from a few hospitals, another common application, is primarily affected by challenges 3 and 4. Over the next few days, I will dive deeper into these four topics, with a special focus on challenges 3 and 4.
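The communication and heterogeneity challenges above can be made concrete with a minimal federated averaging sketch. This is an assumption-laden toy, not code from the paper: `local_step` and `fed_avg` are hypothetical names, the model is a plain linear regression, and "statistical heterogeneity" is simulated by giving each client a different feature distribution.

```python
import numpy as np

def local_step(weights, X, y, lr=0.05, epochs=5):
    """A few epochs of local least-squares gradient descent on one device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(clients, rounds=20, dim=2):
    """Server averages client updates, weighted by local data size."""
    w = np.zeros(dim)
    for _ in range(rounds):          # each iteration = 1 communication round
        updates, sizes = [], []
        for X, y in clients:         # in practice: only a subset of devices
            updates.append(local_step(w, X, y))
            sizes.append(len(y))
        w = np.average(updates, axis=0, weights=sizes)
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Statistical heterogeneity: each client draws features from a shifted range.
clients = []
for shift in (0.0, 2.0, -2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(0, 0.01, 50)))

w = fed_avg(clients)
print(np.round(w, 2))  # approximately [ 2. -1.]
```

Note how the server only ever sees weight vectors, never the clients' raw (X, y) data, and how the outer loop counts communication rounds, the quantity challenge 1 says efficient methods should minimize.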
---------- For 30 days, I’m dedicating at least one hour each day to exploring FL: diving into its concepts, applications, and implications. I’m documenting my journey and learnings here: https://lnkd.in/esj_93Ss Let’s connect if you’re also curious about FL or have insights to share. #30DaysOfFLCode #FederatedLearning #MachineLearning