Namaste FPGA Technologies

Professional Training and Coaching

Mumbai, Maharashtra 6,632 followers

Empowering Tomorrow's Innovators through Specialized FPGA Training for Semiconductor Applications

About us

We offer a comprehensive learning path designed specifically for Front-End VLSI enthusiasts. Our curated program takes you from the fundamentals to advanced topics, ensuring a smooth and successful learning journey. We understand the challenges of navigating the vast world of VLSI learning. Namaste FPGA solves this problem by providing a structured curriculum with courses sequenced for optimal learning. We emphasize practical skills by offering courses with a 95% coding focus and 5% theory, allowing you to learn by doing and solidify your understanding. We've helped over 50K students on Udemy master Front-End VLSI since 2019. We even made UVM training super affordable ($5!), unlike others charging $100-$500, making it accessible to everyone. Imagine having HackerRank's challenges, Udemy's in-depth lessons, and Internshala's internships, all rolled into one platform. That's Namaste FPGA. It's easy to use and affordable, offering everything you need to master Front-End VLSI.

Namaste FPGA offers:

  • Low-latency microservice architecture for an uninterrupted learning experience
  • Best-in-class user data encryption for security
  • Always-available cloud-native application
  • Secure payment methods compliant with PCI-DSS, ISO 27001, and SOC 2
  • Modern UI with mobile-friendly design
  • Dedicated Discord servers for 24/7 connectivity with instructors
  • 48-hour turnaround time for all support inquiries
  • Curated learning paths for Design, Verification, and SoC
  • Courses on essential job skills (RTL Design & Verification), foundational skills, and soft skills
  • Remote internship available for all participants
  • Verified Certificate of Completion
  • Coding exercises and coding contests with unique badges
  • Learn at your own pace
  • Affordable and fixed pricing for all courses

Industry
Professional Training and Coaching
Company size
2-10 employees
Headquarters
Mumbai, Maharashtra
Type
Partnership
Founded
2024
Specialties
RTL Design, RTL Verification, Formal Verification, SoC, Verilog, SystemVerilog, and UVM

Locations

Employees at Namaste FPGA Technologies

Updates

  • Namaste FPGA Technologies reposted this

    Don't work directly with global signals; it creates interdependency. Global signals, such as I/O ports or reg variables, can be accessed throughout a module, which may lead to dependencies because multiple blocks could modify their values simultaneously, making it difficult to predict the final value of the signal. Debugging also becomes challenging because multiple always blocks can modify the same signal, leading to unpredictable behavior. This reduces the maintainability and reusability of the design.

    A better approach is to use temporary variables. Instead of writing directly to a global signal, store the required value in temporary variables within individual tasks, functions, or always blocks. Then use an independent block to decide the final value of the global signal based on the requirements; see the sketch below. Using local signals and modular encapsulation ensures that each module operates independently, adhering to good design practices. By explicitly specifying how signals are modified, the design becomes easier to debug and more maintainable.

    Learn more about other linting rules from our beginner-friendly courses: https://lnkd.in/d-vuAe_x
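
    A minimal Verilog sketch of the idea (the module and signal names mux_regs_example, tmp_a, tmp_b, and data_out are illustrative, not from the post): each temporary is written by exactly one block, and a single independent block decides the final value of the shared output.

    // Hypothetical example: only one block drives the shared output data_out.
    module mux_regs_example (
      input  wire       clk,
      input  wire       sel,
      input  wire [7:0] a,
      input  wire [7:0] b,
      output reg  [7:0] data_out
    );
      reg [7:0] tmp_a;  // temporary, written only by the first block
      reg [7:0] tmp_b;  // temporary, written only by the second block

      always @(posedge clk) tmp_a <= a + 8'd1;
      always @(posedge clk) tmp_b <= b << 1;

      // A single independent block decides the final value of the shared signal.
      always @(posedge clk)
        data_out <= sel ? tmp_a : tmp_b;
    endmodule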

  • Namaste FPGA Technologies reposted this

    Why are UVM components quasi-static in nature rather than purely static? There are two types of phases in UVM: phases used by the simulator and phases used by the UVM environment to perform functional verification of the DUT. Both are distinct and have specific purposes in the verification process. The simulator consists of compilation, elaboration, simulation execution, and post-simulation. The UVM environment has its own set of phases, which include the build phases (build_phase, connect_phase, end_of_elaboration_phase, and start_of_simulation_phase), the run phase, and finally the cleanup phases (extract_phase, check_phase, and report_phase). Both the simulator and UVM environment phases play crucial roles in verifying the DUT.

    Let us understand the purpose of each simulator phase:

    a) Compilation: This phase converts HDL code (design and testbench) into a format executable by the simulator. It includes parsing HDL code for syntax correctness, linking modules, packages, and libraries, checking for semantic errors, and producing an executable model for simulation.

    b) Elaboration: During this phase, all static constructs are initialized and the interconnection between parent and child modules is resolved. The design hierarchy is finalized in this step.

    c) Simulation Execution: This phase simulates the design and verifies its behavior against the requirements. All UVM components are constructed here during the UVM build_phase, and all other UVM phases are executed during this phase.

    d) Post-Simulation: In this phase, simulation outcomes are evaluated, debugged, and prepared for further iterations if necessary. Activities like generating coverage reports and analyzing waveforms occur here.

    From the above discussion, it is evident that UVM components do not exist in the first two simulator phases (compilation and elaboration) and only come into existence during the simulation execution phase. This is why UVM components are referred to as quasi-static: they are not created during elaboration, but they persist throughout the simulation once instantiated. A minimal sketch is shown below.

    Learn more about how we build UVM sequences from scratch here: https://lnkd.in/di5EEDRY
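
    A minimal SystemVerilog sketch of the point above (the class names my_env and my_driver are illustrative): the driver handle exists after elaboration, but the component object itself is only constructed at run time inside build_phase and then persists for the rest of the simulation.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_driver extends uvm_component;
      `uvm_component_utils(my_driver)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
    endclass

    class my_env extends uvm_env;
      `uvm_component_utils(my_env)
      my_driver drv;  // handle only; no object exists during elaboration

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      // Runs during simulation execution: the component is created here (quasi-static).
      virtual function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        drv = my_driver::type_id::create("drv", this);
      endfunction
    endclass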

  • Namaste FPGA Technologies reposted this

    Why are UVM sequences built with uvm_object instead of uvm_component? A test case defines a specific operation to be tested based on a series of expected system behaviors outlined in the specification sheet. A sequence, on the other hand, focuses on generating the transactions required for building test cases. Each test case may use multiple sequences or a single sequence. For example, in the verification of a FIFO, we may have separate sequences to perform write operations and read operations. A test case might involve both sequences, first executing writes until the FIFO is full and then executing reads until the FIFO becomes empty. A test suite consists of multiple test cases, while each test case may consist of multiple sequences. Sequences are built by extending the uvm_sequence or uvm_object class, rather than uvm_component.

    Reason 1: Each test case spans only a portion of the simulation time and is not required for the entire simulation duration. Sequences need to exhibit dynamic behavior: they should come into existence only when a specific sequence is being executed, and once the execution of the test case is completed, they are no longer needed. This dynamic nature is better suited to uvm_object than to the static nature of uvm_component.

    Reason 2: Sequences are executed by a sequencer (uvm_sequencer) using methods like start, wait_for_grant, and finish_item. All these methods require dynamic objects, reinforcing the need to use uvm_object to build sequences instead of uvm_component.

    Reason 3: Since uvm_sequences do not contribute to the component hierarchy, they can be executed in any environment with a compatible sequencer, making them highly reusable. This flexibility is a significant advantage in modular and scalable testbench design. A minimal sketch is shown below.

    Learn more about how we build UVM sequences from scratch here: https://lnkd.in/di5EEDRY
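
    A minimal SystemVerilog sketch (the class names wr_item and wr_seq and the sequencer handle env.sqr are illustrative): the sequence extends uvm_sequence, so it is an object that is created only when needed and started on any compatible sequencer.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class wr_item extends uvm_sequence_item;
      rand bit [7:0] data;
      `uvm_object_utils(wr_item)
      function new(string name = "wr_item"); super.new(name); endfunction
    endclass

    class wr_seq extends uvm_sequence #(wr_item);
      `uvm_object_utils(wr_seq)
      function new(string name = "wr_seq"); super.new(name); endfunction

      virtual task body();
        repeat (4) begin
          wr_item item = wr_item::type_id::create("item");
          start_item(item);          // waits for grant from the sequencer
          assert(item.randomize());
          finish_item(item);
        end
      endtask
    endclass

    // Inside a test's run_phase (assuming a sequencer handle env.sqr exists):
    //   wr_seq seq = wr_seq::type_id::create("seq");
    //   seq.start(env.sqr);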

  • Namaste FPGA Technologies reposted this

    UVM function phases vs. UVM task phases, simplified: In UVM, phases can be categorized into two types: those that do not consume simulation time and those that do.

    Non-time-consuming phases are implemented as functions, since they do not require timing control constructs and execute instantaneously without consuming time. These phases are primarily designed to support building the verification environment by creating instances of different classes, performing connections between components, or analyzing the hierarchy. Commonly used function phases include build_phase, connect_phase, and end_of_elaboration_phase, which execute before the simulation starts. Function phases like final_phase and report_phase execute at the end of the simulation, focusing on cleanup or reporting results. By executing function phases either at the beginning or the end of the simulation, they are effectively separated from the time-consuming task phases.

    Time-consuming phases, on the other hand, are implemented as tasks, since they require timing control constructs to handle delays, waits, and synchronization. Task phases, such as run_phase and main_phase, play a crucial role in applying stimuli to the DUT at specific times during the simulation and collecting responses from the DUT. All dynamic behavior in UVM is managed using task phases. Verification engineers must ensure that objections are raised to allow the execution of these time-consuming phases and to control their completion properly; a minimal sketch is shown below. Separating initialization, execution, and cleanup into distinct segments helps streamline the debugging process by making it easier to identify issues in specific phases.

    Learn how we build them from scratch here: https://lnkd.in/di5EEDRY
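
    A minimal SystemVerilog sketch (the test name my_test and the 100 ns delay are illustrative): build_phase is a zero-time function, while run_phase is a task that consumes simulation time and is kept alive by raising and dropping an objection.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_test extends uvm_test;
      `uvm_component_utils(my_test)

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      // Function phase: executes in zero simulation time, used to build the environment.
      virtual function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        `uvm_info("BUILD", "constructing the environment", UVM_LOW)
      endfunction

      // Task phase: consumes simulation time; the objection controls its completion.
      virtual task run_phase(uvm_phase phase);
        phase.raise_objection(this);
        #100ns;  // stand-in for driving stimulus and collecting responses
        `uvm_info("RUN", "stimulus applied", UVM_LOW)
        phase.drop_objection(this);
      endtask
    endclass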

  • Namaste FPGA Technologies reposted this

    What is the history space of randc that prevents the repetition of values until all possible values are generated? The randc modifier in SystemVerilog is used to generate random values in a cyclic fashion, ensuring that a value repeats only after all possible values in the range have been generated. Unlike rand, which can produce duplicate values, randc guarantees no repetition within a cycle. To achieve this, randc has a built-in mechanism to store the history of values generated so far.

    The history space of randc refers to the storage or tracking mechanism that ensures all possible values are generated before the cycle resets. The size of this history space depends on the range of the variable being randomized. This mechanism introduces computational overhead and consumes heap memory. As the bit width of the randc variable increases, the memory requirements grow exponentially. Therefore, setting appropriate constraints is crucial to ensure that the simulation completes within a reasonable time and does not exhaust system resources.

    The simulator allocates an independent history space for every instance of an object. This allows multiple instances of a transaction class to generate unique cycles of values without interfering with each other. Such independence makes randc particularly suitable for complex verification scenarios where controlled, non-overlapping randomness is required.

    For example, consider a transaction class named MyClass that contains a single data member randc_field declared with the randc modifier. If two instances of this class are created, each instance maintains its own independent history space. As a result, the value cycles generated for the two instances during randomization are independent, as observed on the console. A minimal sketch is shown below.

    Learn to build verification environments in SV and UVM from scratch here: https://lnkd.in/di5EEDRY
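
    A sketch of the MyClass example above (the 3-bit field width is an assumption chosen for brevity): each instance keeps its own randc history space, so c1 and c2 each cycle through all eight values independently.

    class MyClass;
      randc bit [2:0] randc_field;  // 8 possible values per cycle
    endclass

    module tb;
      initial begin
        MyClass c1 = new();
        MyClass c2 = new();
        repeat (8) begin
          void'(c1.randomize());
          void'(c2.randomize());
          // Each instance draws from its own history space, so the cycles differ.
          $display("c1 = %0d  c2 = %0d", c1.randc_field, c2.randc_field);
        end
      end
    endmodule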


Similar pages

Browse jobs