Technical Magic Corporation

IT System Custom Software Development

Ottawa, Ontario · 15 followers

The Unlikely Paradox: Playing with Technology and Magic

About us

Techano™: The Future of Technical Wealth

Welcome to Techano™, a comprehensive library of best-practice concepts, cutting-edge tools, and a robust release and maintenance platform. Centered around the idea of Technical Wealth, Techano™ leverages the power of Scala, Akka, and Play Framework to deliver highly customizable, membership-based applications across various industries. Our cross-vertical core framework significantly reduces development time and long-term maintenance costs, ensuring efficiency and scalability for all your projects. At this time, Techano™ is available by private invitation only, as we are focused on serving a specific, highly underserved niche.

Industry
IT System Custom Software Development
Company size
2-10 employees
Headquarters
Ottawa, Ontario
Type
Privately Held
Founded
1996
Specialties
Scala, Akka, Play Framework, and AWS Cloud Services

Locations

  • Primary

    2583 Carling Ave

    Suite #M010

    Ottawa, Ontario K2B 7H7, CA

Updates

  • A #shoutout to Kerri Quirt for her help in tracking down a bug with our system (through the Cyber Savvy Senior program that we are just starting to light up). I used to despise bugs, but then I realized that doing so might not be the best way to handle them. Now, instead, I've learned to embrace them (and yes, that is still freakin' hard at times) and let them lead me to the corresponding changes in process. If those process changes do not get made, then guess what? Similar bugs come back again and again. This does not apply only to code -- this applies to just about everything in life and in business, actually. I think this is that old adage about "learning the lesson" in action. As I've said many times before: "That will never happen again" simply isn't good enough. Until a deliberate change has been made, then yes, it will most likely happen again, and again, and again.

    Not going into details on this one at this time -- I think that is for another post -- but it was a very interesting one, since it always passed tests on every machine and mobile device I put it through, yet it would occasionally fail for someone in the wild. Of course, when you get one like that (and you've triple-checked everything), the first thing you think is "what the hell is wrong with THEIR equipment?" Thank you, Kerri, for your persistence in helping track this one down!

  • A massive #ShoutOut to Sergey Poltev for his collaborative work with our #MicroShifting platform for our Cyber Savvy Senior project! (CyberSavvySenior.com) We had a wonderful day today working with Sergey, and his lovely better half, recording some initial promo material for the soft launch of this project. Lots more to come as the interviews continue to be lined up. Special thanks to Theresa Hewett and Bita Omidi for taking the time to support Sergey and this project as well -- especially since Theresa took time out on her #birthday to do so. Don't let a scammer steal your inheritance -- sign up a senior today! Look out for more from Kerri Quirt, Mireille Lavallee, Lynette O'Brien, and perhaps many others. Stay tuned -- video is in the editing room next.

    Don’t Let Scammers Take Control—Protect Yourself Online Now!

    cybersavvysenior.com

  • Well, we did it: #Pekko clusters are now live! Even more amazing is that, even with the massive (unexpected) rewriting I ended up going through, the upgrade was completely uneventful. (I was simply expecting to add the Cluster and migrate SLOWLY, not move all operations to it in one go.) Our discovery technique even worked like a charm!

    Discovery? Well, here's the challenge: in #AWS #ElasticBeanstalk, EC2 instances are spun up without any pre-determined IP addresses. They can be reached via the "front door" (the load balancer), but in order to cold boot a cluster you need a list of seed nodes to fire things up. So we needed to A) find all the current EC2 instances, B) decide who is going to be the lead dog, C) boot the cluster, and then D) ensure that no one is left behind. Now, I did mention that we have to run the exact same code on each instance, right? When you add the spot pricing optimizations that AWS allows on the group of EC2 instances running behind the load balancer, you also have to account for instances that pretty much come and go as they please.

    So here's what we came up with: first we assign a random number to a countdown generator, which we then use to send a series of exploratory pulses to the load balancer. Since the WS client inside #Play won't store cookies when you make an API call like this, the load balancer goes into round-robin mode with each successive call. The outgoing call carries ALL the currently known nodes, which may just be "self" initially, and the response returns all known nodes. We keep accumulating each of these collections of nodes (as a Set, obviously), and that forms the basis of the seed nodes once the timer runs out. If any other node reports that there is already an active cluster, then of course the probing EC2 instance instantly joins that cluster. In our trials, thanks to the luck of the draw with the randoms, we occasionally had a split brain; however, it cleared itself almost instantly once it was detected by the next pulse. Remember, this only ever occurs on a completely cold boot.

    Now that the cluster is up, it may actually run for decades (remember that typical 17-year track record for the things I write?), simply migrating across each set of EC2 instances as they roll through upgrades and pricing swaps. It seems I spend inordinate amounts of time writing code that, in the end, is never even noticed (in a good way). Now, with this in the rear view mirror, perhaps I can get back to writing code for things that people do notice (also in a good way!). I have so many plans 😎
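
    For the curious, that discovery loop has roughly the shape sketched below. This is a minimal reconstruction from the description above, assuming a hypothetical /cluster/discover endpoint and a newline-separated address wire format; the real endpoint, timings, and error handling were not published.

    ```scala
    // Hedged sketch of the pulse-based cold-boot discovery described above.
    // The endpoint path, wire format, pulse counts, and class name are all
    // illustrative assumptions, not the actual code.
    import java.util.concurrent.atomic.AtomicInteger
    import scala.concurrent.ExecutionContext
    import scala.concurrent.duration._
    import scala.util.Random
    import org.apache.pekko.actor.{ActorSystem, Address, AddressFromURIString}
    import org.apache.pekko.cluster.Cluster
    import play.api.libs.ws.{DefaultBodyWritables, WSClient}

    class DiscoveryProbe(ws: WSClient, system: ActorSystem, lbUrl: String)
                        (implicit ec: ExecutionContext) extends DefaultBodyWritables {
      private val cluster = Cluster(system)
      // Every instance starts out knowing only itself; the random pulse count
      // is what breaks the symmetry between otherwise identical instances.
      @volatile private var known: Set[Address] = Set(cluster.selfAddress)
      private val pulsesLeft = new AtomicInteger(5 + Random.nextInt(10))

      def pulse(): Unit =
        // No cookies are kept on WS calls like this, so the load balancer
        // round-robins each successive request to a different instance.
        ws.url(s"$lbUrl/cluster/discover")
          .post(known.map(_.toString).mkString("\n"))
          .foreach { resp =>
            val reported = resp.body.linesIterator.filter(_.nonEmpty)
              .map(AddressFromURIString(_)).toSet
            known ++= reported // accumulate every node anyone has seen so far
            if (pulsesLeft.decrementAndGet() <= 0)
              // Countdown expired: the accumulated set becomes the seed nodes.
              // (A probe that learned of an already-active cluster would
              // simply join it rather than seed a new one.)
              cluster.joinSeedNodes(known.toList.sortBy(_.toString))
            else
              system.scheduler.scheduleOnce(2.seconds)(pulse())
          }
    }
    ```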

  • Still crazy after all of these commits... At some point I'll get the actual stats on the work done to accomplish this. Things like this are never hard when starting with a blank slate -- they never start with a blank slate, however... 😳 😱

  • Whew, I think we've made it -- it doesn't count yet though -- it never counts -- not until it's a full-fledged production release. The whole serialization thing was pretty intense. It's not like I've not done work in this arena before -- after all, I have close to 1,000 endpoints, and maybe 30 to 40% of those are POSTs, so I have data moving around the wire all the time. It's just that there seems to be an expectation as to how it should be done, so I went down many rabbit holes only to discover that perhaps the designers of each particular approach didn't have my set of challenges in mind at the time. In short, while there was a lot of "expect the unexpected," somehow I still managed to enjoy surprise after surprise! Nonetheless, I chose the "persevere" over the "pivot" course of action and kept pushing through any and all obstacles. The train of thought was that I'm eventually going to have to resolve this, so I might as well hunker down and get it all done now.

    Given the depth of the Actor implementation, I was really looking to light up a #Pekko Cluster and, here's the key thing, migrate the Actors to this model after the fact, while they still ran in the mode they are in now (i.e. each server is an island unto itself, with a negotiated locking mechanism for deciding who handles jobs that only need to run once). However... as I mentioned earlier (ahem), I got trapped by #DRY (Don't Repeat Yourself), and this led to one thing "moving over", which quickly cascaded into everything having to move over into the cluster.

    All was going extremely well until I hit the wall on #serialization. It's not like I didn't know how to serialize things. It's more like: if I chose path A, what would be the short-term impact on writing the code and, more importantly, the long-term impact on maintaining the code, versus path B, versus path C, etc.? No matter how I sliced it, I always ended up with a series of lists that have to be repeated (ugh -- I really try to avoid that). I did, with a LOT of code wrangling, manage to push two primary lists directly into the #Enumerations themselves. One of the lists gives a compiler error when it is out of sync, which is perfect! The second, however, can fail at runtime if not properly maintained. I really, really, really tried to avoid that. Runtime errors will of course mandate exponentially expanding test cycles to try to flush things like that out before the rubber hits the road. Hey, I've already got over 2,000 unit tests now, so what's a few hundred more -- right?
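
    The compiler-checked list mentioned above can be sketched roughly as follows. This is an illustrative reconstruction (hypothetical names, not the actual codebase): with enumeratum, findValues is macro-derived so the members list can never drift, and an exhaustive match over the sealed type turns a forgotten member into a build failure.

    ```scala
    // Illustrative reconstruction (hypothetical names) of pushing a "list"
    // into the enumeration itself so the compiler catches omissions.
    import enumeratum._

    sealed trait ActorCommand extends EnumEntry {
      def manifest: String // every member MUST declare its wire manifest
    }

    object ActorCommand extends Enum[ActorCommand] {
      // Forgetting `manifest` on a new member is a compile error: the
      // abstract def forces each case object to supply its own value.
      case object Start     extends ActorCommand { val manifest = "cmd-start" }
      case object Stop      extends ActorCommand { val manifest = "cmd-stop" }
      case object Rebalance extends ActorCommand { val manifest = "cmd-rebalance" }

      // Macro-derived: adding a member updates this list automatically.
      val values = findValues

      // Under -Xfatal-warnings, a non-exhaustive match here fails the build
      // the moment a new member is added but not handled.
      def dispatch(cmd: ActorCommand): String = cmd match {
        case Start     => "booting"
        case Stop      => "draining"
        case Rebalance => "reshuffling"
      }
    }
    ```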

  • In the spirit of the Lean Startup concept of “pivot or persevere,” my experience with #Pekko Clusters has been an exercise in perseverance through unexpected complexities. Clustering is an advanced topic, and while nearly every component can be made to work independently, there’s one critical piece that doesn’t truly reveal itself until the very end: serialization. Serialization is far more intricate than first anticipated, full of subtle pitfalls and unexpected rabbit holes, especially in #Scala. If I could put a giant stop sign on the front of the #Pekko main website, it would read, “Don’t even think about #clusters until you’ve mastered #serialization.” It needs to be integrated at the design level, not tackled after a codebase is built, because the nuances of the language make retrofitting serialization a major undertaking.

    Working with a large, established codebase felt like driving a massive truck toward a cliff -- slamming on the brakes just in time, only to see in the mirror that the trailer has jackknifed and is heading over the edge anyway, taking me along for the ride. I faced a choice: revert the changes and start fresh with serialization baked in, or push forward and solve the problem here and now, armed with lessons learned. I chose the latter. I can always revert later if this is a complete dead end, and after all, I have to solve this issue once and for all anyway. Might as well solve it now, or at least know what the current unsolvables or insurmountables are for the second run up the mountain. It’s a challenging place to be, but solving serialization properly (and elegantly!) across numerous classes that were initially designed without serialization in mind is a hurdle I need to clear with this code base at some point. With each step forward, I'm refining my approach and ensuring my next attempt is grounded in a much deeper understanding of what clustering truly requires.

    It's amusing to note that a LOT of what #AI spits up is wrong, very wrong -- it simply hasn't grasped the elegance of good design yet. Most of its examples wanted to create at least FOUR lists (two in the form of nested ifs or case statements, and/or an additional class per trait with those self-same matches), all of which would have to be maintained in lock step WITHOUT any compiler warnings or errors if anything was missed. That would be brutal for "the next guy into the code" and really breaks the #DRY principle. I've managed to get it down to TWO lists thus far, and the second one (inside the main application conf file) is done at the trait level, so it already encompasses a LOT of automatic future-proofing (see the sketch below the link).

    FYI: I want to give a massive shout out to this project: https://lnkd.in/e5uADqjf -- I've used it extensively for a very long time, and now I'm converting a lot of the sealed traits that I used for the #Actors to #Enumerations using this library.

    GitHub - lloydmeta/enumeratum: A type-safe, reflection-free, powerful enumeration implementation for Scala with exhaustive pattern match warnings and helpful integrations.

    github.com
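
    As a rough illustration of the trait-level conf binding (the second list mentioned above), here is a hedged sketch. The class names, manifest scheme, and config keys are assumptions for illustration, not the actual Techano code.

    ```scala
    // Hedged sketch of binding a Pekko serializer at the sealed-trait level.
    // The matching application.conf entries -- the "second list" living in
    // the main conf file -- would look something like:
    //
    //   pekko.actor {
    //     serializers { command = "example.CommandSerializer" }
    //     serialization-bindings {
    //       # Binding the trait covers every current AND future subtype,
    //       # which is where the "automatic future-proofing" comes from.
    //       "example.Command" = command
    //     }
    //   }
    package example

    import enumeratum._
    import org.apache.pekko.serialization.SerializerWithStringManifest

    sealed trait Command extends EnumEntry
    object Command extends Enum[Command] {
      case object Start extends Command
      case object Stop  extends Command
      val values = findValues // macro-derived: always in sync with the members
    }

    class CommandSerializer extends SerializerWithStringManifest {
      override val identifier: Int = 424242 // must be globally unique and stable

      // enumeratum's entryName doubles as the wire manifest, so there is no
      // hand-maintained list of names to drift out of sync.
      override def manifest(o: AnyRef): String = o match {
        case c: Command => c.entryName
      }

      // Case objects carry no state, so the payload can be empty.
      override def toBinary(o: AnyRef): Array[Byte] = Array.emptyByteArray

      // This is the runtime-failure edge the posts warn about: an unknown
      // manifest only surfaces when a message is actually deserialized.
      override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
        Command.withName(manifest)
    }
    ```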

  • So the #Pekko saga continues. Turns out that I had instinctively built plain actors into a design pattern that mimics the routers that Pekko uses. Once I realized that, things really started to make a lot more sense -- up till then it seemed like I was working on the wrong problem, trying to connect the plumbing to the wrong layers, or so it seemed. I managed to get everything working (except for ONE thing, which I'll talk about in the next post, as it's a bit of a show stopper) and did some deep-dive debugging to fully understand the routers and what each node on the cluster was and was not doing at any given moment. I like to understand code completely before I let it out of the lab -- way too easy to have something work by accident instead of by design. So all is good until... Stay tuned for the next installment, as I might have to back out of a ton of work at this point, and I really wish the documentation had a great big hazard sign on the front end: you have to do ALL the work of a conversion before it becomes apparent that this is a BIG deal and really needed to be the starting point of the conversion, not the sticking point. Documenting this to hopefully give others the heads up before they also fall off this cliff...
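
    For readers unfamiliar with Pekko's routers, here is a minimal, self-contained example of the pattern in question; the worker and message names are made up for illustration.

    ```scala
    // A minimal sketch of the Pekko router pattern the hand-rolled actors
    // ended up mimicking. Worker and Job are illustrative names.
    import org.apache.pekko.actor.{Actor, ActorSystem, Props}
    import org.apache.pekko.routing.RoundRobinPool

    final case class Job(payload: String)

    class Worker extends Actor {
      def receive: Receive = {
        case Job(p) => println(s"${self.path.name} handled $p")
      }
    }

    object RouterDemo extends App {
      val system = ActorSystem("demo")
      // One router actor fronting five identical workers; messages are fanned
      // out round-robin, which is essentially what the hand-rolled pattern did.
      val router = system.actorOf(RoundRobinPool(5).props(Props[Worker]()), "workers")
      (1 to 10).foreach(i => router ! Job(s"job-$i"))
      Thread.sleep(1000)
      system.terminate()
    }
    ```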

  • I’m working on #Pekko #Clusters, and interestingly enough, they don’t appear to be functioning as documented. I always try to fully understand the code I write so that it achieves precisely the expected results, but my tests yesterday showed a disconnect between what I thought the code should be doing and the actual outcomes. This could be due to a variety of reasons, and I see it as a good thing: an opportunity to grow my understanding of the platform. Inspired by “The Lean Startup,” I apply a similar philosophy to coding: writing tests and verifying outcomes to ensure the code behaves as intended. When it doesn’t, it’s a learning opportunity. Today’s focus is on running more tests and scenarios to understand the discrepancy between expected and actual results. I've discovered that Pekko Clusters don’t seem to operate quite like the unified whole one might expect: different instances of the same cluster are behaving differently, which is puzzling. My goal today is to dive deep, analyze the code, and uncover why this inconsistency is occurring. PS: As frustrating as this appears to be now, I know it is saving the massive frustrations later on in production that would have resulted if the code had just "accidentally" or "incidentally" worked "as expected".
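
    One hedged way to make that per-instance divergence visible (an illustrative sketch, not the actual test harness) is to subscribe each node to cluster domain events and compare the logs across instances:

    ```scala
    // Start one of these on every node; diffing the logs shows what each
    // instance believes about cluster membership at any moment.
    import org.apache.pekko.actor.{Actor, ActorLogging}
    import org.apache.pekko.cluster.Cluster
    import org.apache.pekko.cluster.ClusterEvent._

    class ClusterWatcher extends Actor with ActorLogging {
      private val cluster = Cluster(context.system)

      override def preStart(): Unit =
        cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
          classOf[MemberEvent], classOf[UnreachableMember])

      override def postStop(): Unit = cluster.unsubscribe(self)

      def receive: Receive = {
        case MemberUp(m)          => log.info("Saw MemberUp: {}", m.address)
        case MemberRemoved(m, st) => log.info("Saw MemberRemoved: {} (was {})", m.address, st)
        case UnreachableMember(m) => log.warning("Saw UnreachableMember: {}", m.address)
        case other: MemberEvent   => log.info("Cluster event: {}", other)
      }
    }
    ```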

  • Still working on the discovery sequence for #Pekko Clusters running inside of #Play Framework, running directly on #AWS Elastic Beanstalk (no containers!) -- very interesting problem to solve! I did manage to solve the problem of detecting that the system has booted into a split-brain situation and having the clusters self-repair into a single cluster.

    Today I plan on getting my discovery probes to record the instances they discover, even if they are not booted up yet, and use those discovered nodes as seed nodes rather than just "self" (cluster.join(cluster.selfAddress)). If we also allow any discovered, yet still inactive, nodes to pass along their collection of seed nodes, this should make it very difficult to get into the split-brain boot situations we've been observing up till now in dev trials. Of course, the moment a node "decides" to be king, then any time it is discovered, the discovering node would auto-join. Now, however, it would have decided to become king with, potentially, a collection of seed nodes, which (presumably) instantly eliminates them from the competition.

    It's really interesting writing code that, by its nature, is identical on each instance (i.e. the environment gives us no clues as to who's who) and yet has to come to completely different outcomes. Lots of trials planned for today!
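
    The join decision described above boils down to something like this hedged sketch (names illustrative): self-join only as a last resort, otherwise hand the whole discovered set to joinSeedNodes so identical instances converge on one cluster.

    ```scala
    // Joining only "self" cold-boots a one-node cluster -- do that on several
    // instances at once and you get the split-brain boot. Handing the full
    // discovered set to joinSeedNodes lets every instance converge instead.
    import org.apache.pekko.actor.{ActorSystem, Address}
    import org.apache.pekko.cluster.Cluster

    object JoinDecision {
      def join(system: ActorSystem, discovered: Set[Address]): Unit = {
        val cluster = Cluster(system)
        if (discovered.isEmpty)
          cluster.join(cluster.selfAddress) // old behaviour: every node crowns itself
        else
          // Deterministic ordering means identical code on identical instances
          // still agrees on who the first seed node (the "king") is.
          cluster.joinSeedNodes((discovered + cluster.selfAddress).toList.sortBy(_.toString))
      }
    }
    ```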
