Learn Gate-Array Programming In Python and Software-Defined Radio
Bruce Perens writes: "Chris Testa KD2BMH taught a class on gate-array programming of the SmartFusion chip, a Linux system and programmable gate array on a single chip, using MyHDL, the Python hardware design language, to implement a software-defined radio transceiver. Watch all 4 sessions: 1, 2, 3, 4. And get the slides and code. Chris's Whitebox hardware design, implementing an FCC-legal 50-1000 MHz software-defined transceiver in Open Hardware and Open Source, will be available in a few months. Here's an Overview of Whitebox and HT of the Future.
Slashdot readers funded this video and videos of the entire TAPR conference. Thanks!"
Re: (Score:2)
Rather, it is just confirmation of Slashdot's true dharma.
How this is different from HackRF (Score:5, Informative)
HackRF is designed to be test equipment rather than a legal radio transceiver. It doesn't meet the FCC specifications for spectral purity, especially when amplified. You could probably make filters to help it produce a legal output.
Whitebox is meant to meet the FCC spurious-signal specifications that apply when amplification to 25 watts or more is used. Amplifiers also contribute spurious signals, and will usually incorporate their own filters.
HackRF is something that plugs into your laptop via USB. Whitebox is meant to be a stand-alone system, or one controlled from your smartphone via a WiFi or Bluetooth link.
Whitebox is optimized for battery power. Using a flash-based gate array rather than the conventional SRAM-based one makes a big difference.
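For readers without a radio background, here is a hedged plain-Python sketch (nothing to do with the actual Whitebox or HackRF code) of why unfiltered outputs are a problem: it measures the third harmonic of a square wave with two single-bin DFTs. A hard-switched output carries harmonics only about 9.5 dB below the carrier, while spurious-emission limits typically demand tens of dB of suppression, which is what the output filters are for.

```python
import math

def dft_bin(x, k):
    """Magnitude of DFT bin k, computed as a direct sum (no FFT needed
    since we only want two bins)."""
    n = len(x)
    acc = sum(x[i] * complex(math.cos(2 * math.pi * k * i / n),
                             -math.sin(2 * math.pi * k * i / n))
              for i in range(n))
    return abs(acc)

N = 1024
# One cycle of an ideal square wave: the worst-case unfiltered output.
square = [1.0 if i < N // 2 else -1.0 for i in range(N)]

fundamental = dft_bin(square, 1)
third = dft_bin(square, 3)
dbc = 20 * math.log10(third / fundamental)
print(f"3rd harmonic: {dbc:.1f} dBc")  # about -9.5 dBc for a square wave
```

The third harmonic of a square wave has 1/3 the fundamental's amplitude, hence roughly -9.5 dBc; amplify that to 25 watts without filtering and the spur alone is watts of out-of-band power.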
Not a fan of procedural languages syntax for HDL (Score:5, Interesting)
That said, it's great to see Chris getting this project off the ground. It'll be very helpful for the SDR community and I hope we see lots of good things come of it.
Re:Not a fan of procedural languages syntax for HD (Score:5, Informative)
Chris can explain this much better than I, but we are definitely conscious of the gate-array resource use. Currently we are running within the space of the least expensive SmartFusion II chip, which I think you can get for $18 in quantity. SmartFusion 1 was more of a problem, as it had no multiplier macrocells and we had to build multipliers out of gates. SmartFusion II provides 11 multipliers in its lowest-end chip, and thus the fixed-point multiply performance of a modern desktop chip for a lot less power.
We are also aware of algorithmic costs. For example, we were using Weaver's "third method" of SSB generation and will probably go to something else, maybe a version of Hartley's method.
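The actual Whitebox filter chains aren't shown here, but both Weaver's and Hartley's SSB methods are built around a quadrature (complex) mix. A minimal sketch of that common kernel, in plain Python with invented names, just to make the idea concrete: multiplying by a complex exponential shifts a signal's frequency, which a single-bin DFT can confirm.

```python
import cmath
import math

N = 256  # samples per analysis block

def tone(bin_k):
    """Complex exponential at DFT bin k: an analytic signal, so it has
    no negative-frequency image to worry about."""
    return [cmath.exp(2j * math.pi * bin_k * n / N) for n in range(N)]

def mix(signal, shift_bins):
    """Quadrature mix: multiply by a complex LO to shift frequency.
    This is the operation at the heart of Weaver- and Hartley-style SSB."""
    lo = tone(shift_bins)
    return [s * l for s, l in zip(signal, lo)]

def bin_mag(x, k):
    """Magnitude of a single DFT bin, by direct summation."""
    return abs(sum(s * cmath.exp(-2j * math.pi * k * n / N)
                   for n, s in enumerate(x)))

shifted = mix(tone(10), 5)  # a bin-10 tone shifted up by 5 bins
print(bin_mag(shifted, 15), bin_mag(shifted, 10))  # energy moves to bin 15
```

In hardware the complex multiply costs real multiplier macrocells per sample, which is exactly the resource pressure described above.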
Re: (Score:2)
Just because the implementation language is procedural and the input-specification language is procedural doesn't mean the input can't be richly descriptive, if all the input does is generate data structures describing the device model. And your "object-oriented" comment seems quite out of place: it hardly brings anything new to the table that would be useful in this case, compared to going the other way.
Having said all that, I'd probably go for Lua anyway since t
Re: (Score:3)
Not sure you understand. The OO model is useful for elegantly representing a 4-input device whose logical output is determined by a look-up table, which is the fundamental logic element of an FPGA. Lua is a fine small embedded language, but the purpose of MyHDL in this case is not to execute Python at runtime; it is to generate VHDL or Verilog describing an inherently parallel implementation of an algorithm.
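As a hedged illustration of that point (this is my own toy class, not MyHDL's actual object model): a 4-input look-up table is naturally an object carrying a 16-bit truth table, with evaluation as a method.

```python
class LUT4:
    """A 4-input look-up table, the basic combinational element of an FPGA.
    `init` is a 16-bit truth table: one output bit per input combination."""

    def __init__(self, init):
        assert 0 <= init < (1 << 16), "truth table must fit in 16 bits"
        self.init = init

    def __call__(self, a, b, c, d):
        # Pack the four input bits into an index into the truth table.
        index = (d << 3) | (c << 2) | (b << 1) | a
        return (self.init >> index) & 1

# Truth table for a 4-input AND: only input combination 0b1111 outputs 1.
and4 = LUT4(1 << 15)
print(and4(1, 1, 1, 1), and4(1, 0, 1, 1))  # 1 0
```

The synthesis problem is then mapping a Boolean network onto thousands of such objects plus the routing between them, which is where the real complexity lives.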
Re: (Score:3)
Re: (Score:3)
If you ever write a means of describing digital logic designs in Lua we can compare it. Just describing data structures is not sufficient, you need to describe parallel boolean algebraic operations and macrocells such as multiply. At the moment no such thing exists and it would take a long time to duplicate the work of the MyHDL project.
Re: (Score:2)
Just describing data structures is not sufficient, you need to describe parallel boolean algebraic operations and macrocells such as multiply.
You're effectively saying that a compiler must embody not only syntax but also the semantics of its input format. I never disagreed with that! It's kind of obvious, otherwise you have a mere parser. Plus, I didn't say you can't do that in Python, in fact I explicitly said that 1) it's perfectly possible to do it in Python, but 2) perhaps Lua would have been a somewhat better choice.
I have been in fact very much interested in having a similar system in Lua, but the proprietary nature of virtually all the rel
Re: (Score:3)
Chris and I would like to do an Open gate array as our next project. Sufficient patents have expired, etc.
Re: (Score:2)
Re: (Score:2)
I would say that the main advantage of using Python is in the verification process: writing test fixtures and analyzing simulation results is much easier with the Python toolkit. Designing real-world digital signal processing for the FPGA also feels much more natural.
In the end, all simulations end up running in a real Verilog simulator after conversion. I use Icarus Verilog, and it integrates seamlessly at this point. You can tie in your own Verilog modules, too.
Chris KD2BMH
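To make the verification point above concrete, here is a hedged toy example (the names and the filter are illustrative, not Whitebox code): a fixed-point 4-tap moving average, as it might run in the FPGA, checked sample-by-sample against a floating-point golden model.

```python
def golden_avg4(samples):
    """Floating-point golden model: 4-tap moving average."""
    out = []
    window = [0.0] * 4
    for s in samples:
        window = window[1:] + [s]
        out.append(sum(window) / 4)
    return out

def fixed_avg4(samples, frac_bits=8):
    """Fixed-point model of the same filter: quantize the input,
    accumulate in integers, divide by 4 with a shift, as hardware would."""
    scale = 1 << frac_bits
    window = [0] * 4
    out = []
    for s in samples:
        window = window[1:] + [int(round(s * scale))]
        out.append((sum(window) >> 2) / scale)
    return out

# Drive both models with the same stimulus and compare within one LSB.
stimulus = [0.1 * i for i in range(32)]
for g, f in zip(golden_avg4(stimulus), fixed_avg4(stimulus)):
    assert abs(g - f) < 1.0 / (1 << 8), (g, f)
print("fixed-point model matches golden model within 1 LSB")
```

In a real MyHDL flow the fixed-point side would be the converted RTL running in a simulator, but the fixture shape, golden model, stimulus, and comparison is the same.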
Re: (Score:1)
To all those who equate MyHDL with "procedural input", just because it is pure Python, please hold your horses for a minute.
HDLs like Verilog and VHDL have both procedural and concurrent semantics. The concurrent part is very specific: fine-grained, massive, but tightly controlled through event-driven semantics. The only things necessary to emulate that in Python are generators (functions with state), a pure Python concept, and an event-driven scheduler (implemented in a Simulation object).
As a res
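The generator-plus-scheduler idea above takes surprisingly little machinery. Here is a hedged toy sketch, deliberately much cruder than MyHDL's real Simulation kernel: two generator "processes" run concurrently against shared signals, each `yield` suspending a process until the next scheduler step.

```python
def simulate(processes, steps):
    """Toy scheduler: resume each suspended generator once per step,
    emulating HDL-style concurrency (not MyHDL's real event queue)."""
    for _ in range(steps):
        for p in processes:
            next(p)

trace = []

def clock_driver(sig):
    """Concurrent process: toggle the clock every step."""
    while True:
        sig["clk"] ^= 1
        yield

def counter(sig):
    """Concurrent process: count on clock-high steps."""
    while True:
        if sig["clk"]:
            sig["count"] += 1
            trace.append(sig["count"])
        yield

signals = {"clk": 0, "count": 0}
simulate([clock_driver(signals), counter(signals)], steps=8)
print(trace)  # [1, 2, 3, 4]: the counter advances on every other step
```

A real HDL kernel adds delta cycles, sensitivity lists, and signal-update semantics, but the suspend-and-resume core really is just generators.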
Re: (Score:1)
Hey Jan, thanks for making MyHDL :D
The basics can be implemented... (Score:1)
...in just about any language,
News for nerds? (Score:2)
Nope, Chris Testa!
(okay, this is actually a fine example of 'news for nerds' submissions, so kudos.)
FOSS and ham radio need fully open FPGAs (Score:5, Interesting)
Free and Open Source Software (FOSS) has achieved immense success worldwide in virtually all areas of programming, with only one major exception where it has made no inroads: FPGAs. Every single manufacturer of these programmable devices has refused to release full device documentation which would allow FOSS tools to be written so that the devices could be configured and programmed entirely using FOSS toolchains.
It's a very bad situation, directly analogous to not being able to write a gcc compiler backend for any CPU at all, and instead having to use a proprietary closed source binary compiler blob for each different processor. That would have been a nightmare for CPUs, but fortunately it didn't happen. Alas it has happened for FPGAs, and the nightmare is here.
The various FPGA-based SDR projects make great play about being "open source, open hardware", but you can't create new bitstreams defining new codecs for those FPGAs using open source tools. It's a big hole in FOSS capability, and it's a source of much frustration in education and for FOSS and OSHW users of Electronic Design Automation, including radio amateurs.
If FPGAs are going to figure strongly in amateur radio in the forthcoming years, radio amateurs who are also FOSS advocates would do well to start advocating for a few FPGA families to be opened up so that open source toolchains can be written. With sufficient pressure and well presented cases for openness, the "impossible" can sometimes happen.
Re: (Score:2)
An Open gate-array is one of those "if you build it, they will come" sort of things. Chinese fabs would compete with each other to drive the price down. It would become the standard low-end part and gate-array manufacturers would have to compete on high-end only.
So I am really interested in doing it, and so is Chris. We just can't ignore our current business in order to do it.
Re: (Score:2)
I don't think an open source FPGA design can bring much margin for improvement in pricing.
Keep in mind that the FPGA manufacturers already face a form of competition: beyond a certain point, it's cheaper to roll your own ASIC.
This already constrains the volume and price of their low-end offerings.
You're also being quite naive about competition between "cheap" manufacturers when it comes to semiconductors.
First challenge is that lowest cost/transistor is provided by relatively modern processe
Re: (Score:2)
There's a partial list of fabs at Wikipedia [wikipedia.org]. There are more than just those three.
Sure, process optimization per fab is an issue. We would probably need to start on the very conservative side.
A lot of the time, building a custom ASIC rather than using an FPGA just isn't an option. Most of the products I'm concerned with need to be programmable.
Re: (Score:2)
1. Cut out those which don't have best $/transistor process nodes (~32 nm -- ~14 nm depending on who you ask).
2. Cut out the memory fabs, their processes aren't suitable for general purpose logic.
3. Dig in and notice cases like CNSE which is actually GF.
The remaining list is those 3 plus Intel.
Optimization per fab is a bloody understatement. To have something that is even close to competitive in performance/power/area you need to a) custom-design your gate array for each process, and b) characterize the resultin
Re: (Score:2)
I think you are missing the application for an Open gate array.
It is not really for you and your company. You don't have any particular interest in the open part, and thus you and your company don't fit the demographic of the sort of user we would want. We don't need your money. I can do the first runs of this using MOSIS and its ilk for chump change, and go from there.
It simply doesn't matter if it's 32 nm or 15 nm or 50 nm. What matters is that the user can completely understand the bitstream and produce
Re: (Score:2)
I understand the motivation. I'm just not discussing the parts where I agree with you. :)
I just took issue with your original statement where you envisioned the open gate array dominating the low end market based on price.
Re: (Score:2)
Hopefully we'll find out eventually.
Re: (Score:2)
Yes. EE education and academic research.
There is also the security problem. How can you determine from first principles that the chip really contains what it says it does? Insoluble with any commercial component. Maybe we could make ours sufficiently visible.
So, my feeling is that we could get a grant for this.
Re: (Score:2)
The other issue is that commercially available FPGAs have limited market lives. You could easily spend years developing an open source tool chain for a part that is available only on eBay as a "refurbished after removed from equipment".
It's not a totally different problem from the one faced by developers of open source compilers and graphics drivers: any given model is only on the market for a short while, much less than FPGAs.
And while CPUs are usually* replaced by 100% backward-compatible models, new GPUs usually aren't backward compatible.
* Except when the entire instruction set architecture dies away.
The trick is, of course, to reuse as much of the code as possible to support different architectures.
The same thing is applicable to FPGAs and AS
Re: (Score:2)
Ever heard of SiGe and MPW/COT [mosis.com]? Who needs an FPGA when you can go open source ASIC and produce an initial production run for under $50k, possibly even $10k? There's been some interesting research [google.com] from places like Caltech and Berkeley into fully designed MIMOs, even with integrated antennas in an SOIC, that is in many cases nearly a decade old now.
Re: (Score:2, Informative)
Who needs FPGA when you can go open source ASIC and produce an initial production run for under $50k
Something about the FP in FPGA.
Making a chip is either a huge gamble or a huge amount of verification, usually both. I can buy an FPGA board for $30, and I can reprogram it hundreds of times a day to test some code until it works. Sure, formal verification is nice; so is rapid development. I use cheap FPGA boards as logic analysers, oscilloscopes, test generators, and VNAs, and rather than trying to build a flashy front-end GUI with a bunch of parameters, I just adjust the Verilog or the software in the softcore to
Re: (Score:3)
Yes, we feel your pain. Indeed, it's our pain. The tools are proprietary: you get told how to load the bitstream, but it's an opaque blob. We would like to work on this problem next. How far off that is I can't say; if we can establish a profitable land-mobile radio business (we don't expect to make much off of hams alone), it would help fund such an effort.
Re: (Score:2)
Yes. And looking at the way things have gone previously in JEDEC, we would have to be very aware of manufacturers desire to embed their patents in standards.
Re: (Score:1)
Your template comparing FPGAs to the GCC compiler is flawed. There is a great economy of sca
Re: (Score:3)
David Rowe makes a point about echo cancellers and voice codecs, which he's written in Open Source, working alone. They were supposed to be magic. They were supposed to take big expensive research labs to make. When he actually got down to the work, he found there wasn't really magic there. Codec2 can get clear speech into 1200 bits per second, and OSLEC (the echo canceller) is part of every Asterisk system and other digital telephony platforms.
Steve Jobs also told me this when I was leaving Pixar. He didn't believe th
learn verilog (Score:1)
Re: learn verilog (Score:1)
I use Verilog as Verilog, and Python as SystemVerilog.
Type checking is done at simulation time, and ultimately during synthesis. Duck typing is immensely useful for higher level abstractions.
Chris KD2BMH
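A hedged illustration of the duck-typing point above (the class and function names are invented, not MyHDL's): one stimulus routine drives anything that exposes the right method, with no shared base class and no declared interface, which is what makes Python fixtures so cheap to write.

```python
class RecordingSignal:
    """Signal stand-in for checking: remembers every value driven onto it."""
    def __init__(self):
        self.history = []

    def drive(self, value):
        self.history.append(value)

class PrintingSignal:
    """A different class with the same `drive` method: a debugging stand-in.
    Duck typing means the stimulus below accepts it unchanged."""
    def drive(self, value):
        print("drove", value)

def walking_ones(signal, width=4):
    """Classic walking-ones stimulus: works on anything with .drive()."""
    for bit in range(width):
        signal.drive(1 << bit)

rec = RecordingSignal()
walking_ones(rec)
print(rec.history)  # [1, 2, 4, 8]
walking_ones(PrintingSignal())
```

Type errors surface at simulation time rather than compile time, which is the trade-off mentioned above; for test benches, the flexibility usually wins.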
Re: VHDL is pretty easy (Score:1)
I agree that the essence of RTL isn't too difficult; the complexity arises when you use RTL to do filtering and modulation. Even the vendor tools are lacking when it comes to analyzing Digital Signal Processing. You have to use Simulink, and that's not a cheap proposition.
Chris KD2BMH
Re: VHDL is pretty easy (Score:1)
You do if you'd like to directly convert your model, which works so well at the high level, into C or RTL. Yes you can rewrite your Octave script in Verilog, but part of what makes MyHDL exciting is that this extra work is done for you.
What is this for? (Score:1)
As a regular developer-type geek who's never done anything with radio, can somebody tell me what this does and why it is interesting? I don't want to watch an hour of video to try to figure that out.
(Please don't take that as snark - I'm truly curious.)