Learn Gate-Array Programming In Python and Software-Defined Radio

Bruce Perens writes: "Chris Testa KD2BMH taught a class on gate-array programming for the SmartFusion chip, a Linux system and programmable gate-array on a single chip, using MyHDL, the Python hardware design language, to implement a software-defined radio transceiver. Watch all 4 sessions: 1, 2, 3, 4, and get the slides and code. Chris's Whitebox hardware design, implementing an FCC-legal 50-1000 MHz software-defined transceiver in Open Hardware and Open Source, will be available in a few months. Here's an Overview of Whitebox and HT of the Future. Slashdot readers funded this video and videos of the entire TAPR conference. Thanks!"
  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Wednesday January 07, 2015 @08:00PM (#48760835) Homepage Journal

    HackRF is designed to be test equipment rather than a legal radio transceiver. It doesn't meet the FCC specifications for spectral purity, especially when amplified. You could probably make filters to help it produce a legal output.

    Whitebox is meant to meet the FCC specifications for spurious signals that apply when amplification to 25 watts or higher is used. Amplifiers also contribute spurious signals, and will usually incorporate their own filters.

    HackRF is something that sticks on your laptop via USB. Whitebox is meant to be a stand-alone system, or one that is controlled from your smartphone via a WiFi or Bluetooth link.

    Whitebox is optimized for battery power. Using a flash-based gate-array rather than the conventional SRAM-based one makes a big difference.

  • by StandardCell ( 589682 ) on Wednesday January 07, 2015 @08:07PM (#48760893)
    Folks who do development with Python should be wary of using too many procedural definitions for algorithms, even if they can be converted to hardware. The main reasons are the size of the state machines and data paths, and the efficiency of algorithmic implementations in hardware, since even the best synthesis tools need to be constrained for design frequency and implementation size (hence synthesis pragmas). Granted, the hardware has gotten much more powerful, and yes, I know Python has object-oriented elements, but the ideas of inherent concurrency and expressed versus implied data paths are the trickiest things about designing hardware with languages that most people use procedurally. My other concern is support from formal verification tools to check that Python = Verilog netlist for RTL->gate. For us more experienced hardware folks, I wish there were more emphasis on VHDL or Verilog straight-up, even with open source tools.

    That said, it's great to see Chris getting this project off the ground. It'll be very helpful for the SDR community and I hope we see lots of good things come of it.
    • by Bruce Perens ( 3872 ) <bruce@perens.com> on Wednesday January 07, 2015 @08:18PM (#48760961) Homepage Journal

      Chris can explain this much better than I can, but we are definitely conscious of the gate-array resource use. Currently we are running within the space of the least expensive SmartFusion II chip, which I think you can get for $18 in quantity. SmartFusion 1 was more of a problem, as it didn't have any multiplier macrocells and we had to make those out of gates. SmartFusion II provides 11 multipliers in the lowest-end chip, and thus the fixed-point multiply performance of a modern desktop chip for a lot less power.

      We are also aware of algorithmic costs. For example, we were using Weaver's third method and will probably go to something else, maybe a version of Hartley.
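
      A minimal sketch of what this looks like in MyHDL (illustrative only, not the actual Whitebox code; it assumes MyHDL 1.0's @block API): a registered fixed-point multiply written so that synthesis can map it onto a hard multiplier macrocell rather than building one from fabric gates.

          from myhdl import block, always_seq, Signal, ResetSignal, intbv

          @block
          def fixed_mult(clk, reset, a, b, p):
              """Registered multiply; synthesis maps this onto a DSP macrocell."""
              @always_seq(clk.posedge, reset=reset)
              def logic():
                  p.next = a * b
              return logic

          # Elaborate with 16-bit signed operands and a 32-bit product,
          # then convert to Verilog for the vendor toolchain.
          clk = Signal(bool(0))
          reset = ResetSignal(0, active=1, isasync=False)
          a, b = [Signal(intbv(0, min=-2**15, max=2**15)) for _ in range(2)]
          p = Signal(intbv(0, min=-2**31, max=2**31))
          fixed_mult(clk, reset, a, b, p).convert(hdl='Verilog')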

    • Just because the language of the implementation is procedural and the language of the input specification is procedural doesn't mean the input can't be richly descriptive, if all the input does is generate certain data structures describing the device model. And your "object-oriented" comment seems quite out of place, because it hardly brings anything new to the table that would be useful in this case compared to going the other way.

      Having said all that, I'd probably go for Lua anyway since t…

      • Not sure you understand. The OO model is useful for representing a 4-input device with a logical output determined by a look-up table, which is the fundamental logic element of an FPGA. At least it's useful for doing it elegantly. Lua is a small embedded language, but the purpose of MyHDL in this case is not to execute Python at runtime but to generate VHDL or Verilog describing an inherently parallel implementation of an algorithm.
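
        To make the look-up-table point concrete, here is a minimal sketch (purely illustrative; the class and truth-table encoding are invented for this example) of a 4-input LUT modeled as a Python object:

            class Lut4:
                """A 4-input LUT: a 16-bit truth table indexed by the input bits."""
                def __init__(self, truth_table):
                    assert 0 <= truth_table < 2**16
                    self.table = truth_table

                def __call__(self, a, b, c, d):
                    index = (d << 3) | (c << 2) | (b << 1) | a
                    return (self.table >> index) & 1

            # Example: a LUT configured as a 4-input AND gate (only bit 15 set).
            and4 = Lut4(0b1000_0000_0000_0000)
            assert and4(1, 1, 1, 1) == 1
            assert and4(1, 0, 1, 1) == 0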

        • From what you just wrote, I concluded that I do understand. But the OO model in the Kay(ian?) definition is hardly more suited for describing hardware than not using it. Second, if whatever algorithmic code gets executed serves as a metamodel (regardless of whether the code is OO, non-OO imperative, or functional) rather than a model, then it is indeed the case that Lua is more suited, since it was designed partly as a data description language from its inception, whereas Python just commonly gets mutilated an…
          • If you ever write a means of describing digital logic designs in Lua we can compare it. Just describing data structures is not sufficient, you need to describe parallel boolean algebraic operations and macrocells such as multiply. At the moment no such thing exists and it would take a long time to duplicate the work of the MyHDL project.

            • Just describing data structures is not sufficient, you need to describe parallel boolean algebraic operations and macrocells such as multiply.

              You're effectively saying that a compiler must embody not only the syntax but also the semantics of its input format. I never disagreed with that! It's kind of obvious; otherwise you have a mere parser. Plus, I didn't say you can't do that in Python; in fact, I explicitly said that 1) it's perfectly possible to do it in Python, but 2) perhaps Lua would have been a somewhat better choice.

              I have in fact been very much interested in having a similar system in Lua, but the proprietary nature of virtually all the rel…

    • by CC12123 ( 443428 )

      I would say that the main advantage of using Python is in the verification process: writing test fixtures and analyzing the results of simulations is much easier to do with the Python toolkit. Design of real-world digital signal processing for the FPGA feels much more natural.

      In the end, all simulations end up running in a real Verilog simulator after conversion. I use Icarus Verilog, and it integrates seamlessly at this point. You can tie in your own Verilog modules too; a sketch of this flow follows below.

      Chris KD2BMH
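
      A minimal sketch of that flow (assuming MyHDL 1.0; the adder, test values, and golden model are invented for illustration): a design, a Python test fixture, and conversion to Verilog for a simulator such as Icarus:

          from myhdl import block, always_comb, instance, delay, Signal, intbv

          @block
          def adder(a, b, s):
              @always_comb
              def logic():
                  s.next = a + b
              return logic

          @block
          def testbench():
              a, b = [Signal(intbv(0)[8:]) for _ in range(2)]
              s = Signal(intbv(0)[9:])
              dut = adder(a, b, s)

              @instance
              def stimulus():
                  for x, y in [(1, 2), (100, 27), (255, 255)]:
                      a.next, b.next = x, y
                      yield delay(10)
                      assert s == x + y    # check against a Python golden model
              return dut, stimulus

          testbench().run_sim()

          # Convert the same design to Verilog for Icarus or other simulators.
          adder(Signal(intbv(0)[8:]), Signal(intbv(0)[8:]),
                Signal(intbv(0)[9:])).convert(hdl='Verilog')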

    • To all those who equate MyHDL with "procedural input" just because it is pure Python: please hold your horses for a minute.

      HDLs like Verilog and VHDL have both procedural and concurrent semantics. The concurrent part is very specific: fine-grained, massive, but tightly controlled through event-driven semantics. The only things necessary to emulate that in Python are generators (functions with state), which are a pure Python concept, and an event-driven scheduler (implemented in a Simulation object).

      As a res…
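
      A minimal sketch of that mechanism (assuming MyHDL 1.0's @block API; the design is invented for illustration): three generator-based processes running concurrently under the event-driven scheduler:

          from myhdl import block, always, instance, Signal, delay

          @block
          def concurrent_demo():
              clk = Signal(bool(0))
              count = Signal(0)

              @always(delay(10))        # process 1: free-running clock
              def clockgen():
                  clk.next = not clk

              @always(clk.posedge)      # process 2: counts rising clock edges
              def counter():
                  count.next = count + 1

              @instance                 # process 3: watches the counter
              def monitor():
                  for _ in range(4):
                      yield count       # wait for a change event on count
                      print("count =", count)

              return clockgen, counter, monitor

          concurrent_demo().run_sim(100)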

  • ...in just about any language,

  • Nope, Chris Testa!

    (okay, this is actually a fine example of 'news for nerds' submissions, so kudos.)

  • by Morgaine ( 4316 ) on Wednesday January 07, 2015 @11:32PM (#48762111)

    Free and Open Source Software (FOSS) has achieved immense success worldwide in virtually all areas of programming, with only one major exception where it has made no inroads: FPGAs. Every single manufacturer of these programmable devices has refused to release the full device documentation that would allow FOSS tools to be written, so that the devices could be configured and programmed entirely using FOSS toolchains.

    It's a very bad situation, directly analogous to not being able to write a gcc compiler backend for any CPU at all, and instead having to use a proprietary closed source binary compiler blob for each different processor. That would have been a nightmare for CPUs, but fortunately it didn't happen. Alas it has happened for FPGAs, and the nightmare is here.

    The various FPGA-based SDR projects make great play of being "open source, open hardware", but you can't create new bitstreams defining new codecs for those FPGAs using open source tools. It's a big hole in FOSS capability, and it's a source of much frustration in education and for FOSS and OSHW users of Electronic Design Automation, including radio amateurs.

    If FPGAs are going to figure strongly in amateur radio in the forthcoming years, radio amateurs who are also FOSS advocates would do well to start advocating for a few FPGA families to be opened up so that open source toolchains can be written. With sufficient pressure and well presented cases for openness, the "impossible" can sometimes happen.

    • Ever heard of SiGe and MPW/COT [mosis.com]? Who needs an FPGA when you can go open source ASIC and produce an initial production run for under $50k, possibly even $10k? There's been some interesting research [google.com] from places like Caltech and Berkeley into fully designed MIMOs, even with integrated antennas in an SOIC, that is in many cases nearly a decade old now.

      • by Anonymous Coward

        Who needs FPGA when you can go open source ASIC and produce an initial production run for under $50k

        Something about the FP in FPGA.

        Making a chip is either a huge gamble or a huge amount of verification, usually both. I can buy an FPGA board for $30 and reprogram it hundreds of times a day to test some code until it works. Sure, formal verification is nice; so is rapid development. I use cheap FPGA boards as logic analysers, oscilloscopes, test generators, and VNAs, and rather than trying to build a flash front-end GUI with a bunch of parameters, I just adjust the Verilog or the software in the softcore to…

    • Yes, we feel your pain. Indeed, it's our pain: proprietary tools, and you get told how to load the bitstream, but it's an opaque blob. We would like to work on this problem next. How far off that is I can't say; if we can establish a profitable land-mobile radio business (we don't expect to make much off of hams alone), it would help to fund such an effort.

    • by Iconoc ( 2646179 )
      Dude, you need to get out more often. I want to post a stinging response, but I don't even know where to start. I work on FPGAs on a daily basis; they are powerful devices, but I have no desire to know the most subtle details. The work of hundreds of engineers goes into the development of this year's best devices. Do you propose to gather and employ their combined knowledge somehow? You'd never finish a project.

      Your comparison of FPGAs to the GCC compiler is flawed. There is a great economy of sca…
      • David Rowe makes a point about echo cancellers and voice codecs, which he's written in Open Source, working alone. They were supposed to be magic. They were supposed to take big, expensive research labs to make. When he actually got down to the work, he found there wasn't really magic there. Codec2 can get clear speech into 1200 baud, and OSLEC (the echo canceller) is part of every Asterisk system and other digital telephony platforms.

        Steve Jobs also told me this when I was leaving Pixar. He didn't believe th…

  • Those who are serious about learning logic design for custom integrated circuits and gate arrays should learn Verilog, or better yet SystemVerilog. Python is great and all, but the logic tools are built around Verilog and VHDL and require static typing. Once one knows Verilog and VHDL, then Python, Perl, etc. can be used to dynamically abstract and stamp out repetitive stuff better than the native HDLs can.
    • I use Verilog as Verilog, and Python as SystemVerilog.

      Type checking is done at simulation time, and ultimately during synthesis. Duck typing is immensely useful for higher-level abstractions. (A small example of simulation-time checking follows below.)

      Chris KD2BMH
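
      A minimal sketch of what that means in practice (illustrative; assumes MyHDL 1.0): an intbv carries bounds that are enforced on every assignment during simulation:

          from myhdl import intbv

          x = intbv(0, min=0, max=16)    # behaves like a 4-bit unsigned value
          x[:] = 15                      # in range: fine
          try:
              x[:] = 16                  # out of range: rejected at simulation time
          except ValueError:
              print("bounds violation caught")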

  • As a regular developer-type geek who's never done anything with radio, can somebody tell me what this does and why it is interesting? I don't want to watch an hour of video to try to figure that out.

    (Please don't take that as snark - I'm truly curious.)
