News RSS




The full list of pages in the News section can be viewed here >>>

Altera rolls out Arria 10 software tools for 20 nm SoC FPGAs

Posted Dec 4, 2013, 07:09 by Работа КА   [ updated Dec 4, 2013, 07:10 ]

Altera Corporation has just released its Quartus II software, Arria 10 edition, designed to support hardware and software development on its new 20 nm FPGAs and SoCs.

Based on TSMC's 20 nm process technology, Arria 10 FPGAs and SoCs are targeted at the midrange FPGA and SoC category, with as much as a 15 percent performance gain over higher-end FPGAs and power that is 40 percent lower than comparable midrange devices.
According to Premal Buch, vice president of software engineering at Altera, Quartus II software uses leading-edge algorithms that take advantage of modern multi-core computing technologies. “Architecting our software this way enables us to extend Quartus II software support to Altera’s next-generation product families,” he said.
He said that with the Arria 10 version of the Quartus II software, developers can design, simulate, and compile Arria 10 FPGAs and SoCs relatively quickly, because the release has been designed especially to simplify work with such complex programmable SoCs, using advanced timing models and final pin-outs to enable board layout.
The Quartus II software Arria 10 edition supports multiple FPGA and SoC devices, including the largest density midrange Arria 10 device, featuring up to 1.15 million logic elements (LEs).
Arria 10 FPGAs and SoCs are optimized for systems that require high-performance features while being constrained by strict cost and power budgets. These midrange devices leverage an advanced 20 nm process and include features tailored to address the requirements of a variety of end markets, including communications, broadcast, and compute and storage.




Getting Started with Embedded Linux - Part Four

Posted Nov 21, 2013, 06:53 by Работа КА   [ updated Dec 4, 2013, 07:03 ]

We are continuing our series on how to get started with Embedded Linux if you have experience with embedded systems, but no Linux experience. You can find the first article here, the second article here, and the third here.
Linux has a large number of libraries that can be used for application development, and many of these libraries can be used with embedded systems. You may need to use your package manager to install the library and the related development package, which contains the headers. Some libraries are available in both static and dynamic versions. The static libraries are linked with your program while dynamic libraries are separate, loaded as needed when your program is executed.
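As a quick illustration of the dynamic case, the ldd utility prints the shared libraries an executable will load at run time (a sketch; output abbreviated, and library names and paths vary by system):

  $ ldd /bin/ls
  	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
  	...
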
Whenever I write any but the simplest application, my go-to reference for many years has been Advanced Programming in the UNIX Environment by W. Richard Stevens. This reference, originally published in 1992 before Linux was developed, has been updated by Stephen A. Rago, with a third edition published this year. Linux adopted most of the APIs and interfaces of Unix, although the implementations may be different. Another reference is The Linux Programming Interface by Michael Kerrisk. At over 900 and 1,500 pages respectively, neither is going to end up as bedside reading, but if you need to know the nitty-gritty details of file or process manipulation, signals, threads, inter- or intra-process or network communication and synchronization, and much more, these are great places to start. There are also significant online resources as well as help forums like Stack Overflow.
There is an extensive Open Source infrastructure supporting Linux, both in desktop and embedded environments. The GNU project of the Free Software Foundation maintains a large number of widely used utility programs and libraries. You can download these packages from http://gnu.mirrors.pair.com/gnu, which should automatically connect you to a mirror near you. SourceForge has source for over 300,000 projects, many very substantial. Freecode also has thousands of open-source applications. The last destination on my short list is GitHub, which provides hosting for the code repositories of thousands of projects.

Most libraries or programs are built using the GNU make utility, along with bash scripts or support utilities like automake and autoconf. At its simplest, make checks which files need to be compiled and manages (using a Makefile written by the developer) the order in which these compilations are performed; a minimal example is sketched below. Makefiles can be quite complex, with the Makefile invoking make to build subdirectories or invoking itself recursively. Automake is designed to generate Makefiles, identifying dependencies and invoking libtool, a utility to create shared libraries. Autoconf allows libraries or programs to be compiled for different targets and operating systems or with different options. All of these are beyond the scope of this overview, but O'Reilly has excellent books about make and the autotools.
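As a minimal sketch of a hand-written Makefile (assuming a single-source program hello.c; note that recipe lines must begin with a tab):

  hello: hello.c
  	gcc -o hello -g hello.c

  clean:
  	rm -f hello

Running make rebuilds hello only when hello.c is newer than the existing executable, and "make clean" removes the built program.
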
The standard sequence for building most of the standard libraries or programs for your Linux system starts with downloading the sources, usually in the form of a tar file, possibly compressed with gzip, bzip2, or xz. If I'd like to build my own copy of diff, I would first download the diffutils package from a GNU mirror. Usually I'd use a browser to save the package, but I could also use the wget utility:



  $ cd /tmp

  $ wget ftp://ftp.gnu.org/gnu/diffutils/diffutils-3.3.tar.xz

Untar the file and cd to the resulting directory:

  $ tar xfv diffutils-3.3.tar.xz

  $ cd diffutils-3.3

 

(If the package has a .gz or .tgz suffix, you will need to add the "z" option after "xfv". If it has a .bz2 suffix, add the "j" option.)
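For example, with a hypothetical package name:

  $ tar xfvz package-1.0.tar.gz
  $ tar xfvj package-1.0.tar.bz2

(Recent versions of GNU tar can also detect the compression type automatically when extracting, so plain "tar xfv" often works for all of these.)
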
Many packages have a README file. You should read this before you build. While most packages use a configure script generated by autoconf, not all do. The README will tell you how to build the package.

Building most packages, like the diffutils, is simple: you enter the following commands:

  $ ./configure

  $ make

  $ make install

 

The first command invokes the configure script, which analyzes your system and creates a Makefile to build your library or program tailored for your system. The second command invokes make, which compiles and links the library or program in the work directory. The third command installs the library or program. Naturally, on occasion there will be errors in each of these steps. Configure may tell you that you need to install other packages, or perhaps the headers for installed libraries. The make step may stop with compile errors. The final step may try to install the library or program in a protected system directory. If this is the case, you can either run this last step as root, or prefix it with the sudo command to temporarily assume root privileges. Alternatively, you can specify the --prefix option to configure and point to an unprotected directory:

   $ ./configure --prefix=~/mydiff


When you run "make install", the program and any other files will be installed in the directory given by the --prefix option, in this case, mydiff in my home directory.
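You could then run the freshly installed program directly from that directory (a sketch; diffutils places its programs under bin in the prefix):

  $ ~/mydiff/bin/diff --version
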
With some caveats, which we will discuss in the future, this means that libraries and programs which you build in your native x86 Linux environment can also be built for other processors such as ARM, PowerPC, or MIPS, or for other configurations of Linux, using many of the same tools and techniques.
Our next installment will talk about the Linux kernel, how it is configured, and how to build it.


 


Getting started with Embedded Linux - Part Three

Posted Nov 6, 2013, 08:00 by Работа КА   [ updated Nov 6, 2013, 08:02 ]


We're continuing our series on how to get started with Embedded Linux if you have experience with embedded systems, but no Linux experience. You can find the previous article here and Part 1 here.

Just about every project is going to require using the GNU Compiler Collection (GCC), the GNU binary utilities (Binutils), and make, used to build programs and handle dependencies. Linux does have several IDEs (integrated development environments) like Eclipse, but they're all built around these command line tools. Unlike development on Windows, where using Visual Studio is the rule, many Linux developers do not use an IDE. 

To compile a C program using gcc, write your program using your favorite text editor (vi, emacs, gedit, kwrite, etc.) and save it with a suffix of .c. In the following example, we use the standard first C program from K&R, saved as hello.c.
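One quick way to create the file is straight from the shell with a here-document (a sketch; any text editor works just as well):

  $ cat > hello.c << 'EOF'
  #include <stdio.h>

  int main (void)
  {
      printf ("Hello world!\n");
      return 0;
  }
  EOF

Then enter the following command: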


  $ gcc -o hello -g -O1 hello.c  


This will invoke the C compiler to translate your program, and, if this succeeds without errors, it will go on to invoke the linker with the correct system libraries to create an executable named hello. (Other operating systems identify executables by a suffix, like .exe. Linux executables generally do not have a suffix. Instead a file system flag indicates that a file can be executed.) The name of the executable follows the -o option, the -g option says to generate debugging information, and the -O1 (that's letter O followed by digit 1) tells the compiler to generate optimized code. GCC has a large number of different options, but these are the basics. For easier debugging, you may want to specify -O0 (letter O followed by digit 0) or omit the -O option to compile without optimization. If you have more than one file which you want to compile and link together, just list the source file names one after the other, as in the sketch below.
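For instance, with hypothetical file names:

  $ gcc -o myprog -g -O1 main.c utils.c
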

You might find that your Linux installation is missing some components, like GCC or the headers for the C library, which are not installed by default. If this is the case, you can use your system's package manager to add these components. On a Fedora system, this means using yum or perhaps the packagekit GUI; on an Ubuntu system, you would use apt-get or the synaptic GUI. These package managers will handle downloading and installing the component you request, as well as any dependencies that may be required. A sketch follows.
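For example (the exact package names vary by distribution and release):

  $ sudo yum install gcc glibc-devel       # Fedora
  $ sudo apt-get install gcc libc6-dev     # Ubuntu
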

You can execute programs from the command line by entering the name of the program, if it is in a directory in your path list, or by giving the path to the file. For our example, we can do the following: 

  $ ./hello
   Hello world!


In this case, since our current directory is not listed in the $PATH environment variable, we use the dot (.) to indicate the current directory and then the file name, separated by a slash from the directory specification. 

This might be a good time to use the GDB debugger to run your program. Whether you are doing kernel, driver, or application development, it's likely that you will need to debug your program using GDB. GDB has a command line interface, and it is a good idea to learn the commands for basic operations like printing variable values, setting breakpoints, and stepping through your program. There are several GUIs available for GDB. I use DDD, which now has a new maintainer after being dormant for a while. Other GUIs include the Eclipse CDT IDE, Insight, and even extensions to the Emacs text editor.

You can invoke gdb as follows:

	$ gdb hello
	[ start up messages from gdb ]
	(gdb) break main
	Breakpoint 1 at 0x400530: file hello.c, line 5.
	(gdb) run
	Starting program: /tmp/hello 

	Breakpoint 1, main () at hello.c:5
	5         printf ("Hello world!\n");
	(gdb) cont
	Continuing.
	Hello world!
	(gdb) quit
  

In the above example, the output from gdb is shown in bold; our commands are in normal type. In addition to the initial startup messages from gdb, there may be some other messages about missing debugging information for system libraries, or a message about the program exiting.

In an Embedded Linux environment, you will be using GCC, GDB, and make in ways which are similar to native Linux development. In most cases, you will use a different build of GCC and GDB targeted at the processor you are using. The program names may be different, for example, arm-none-eabi-gcc, which generates code for ARM using the EABI (Embedded ABI). Another difference is that you will most likely be working in a cross-development environment, where you compile on a host system and download your programs to a target system. If you have experience with embedded programming, working in a cross-development environment should be familiar to you. We'll talk about how this works with Embedded Linux in a future installment.
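As a quick sketch, the cross tools are invoked just like their native counterparts, only under the prefixed names (assuming such a toolchain is installed; here we stop at an object file and disassemble it to confirm that ARM code was generated):

  $ arm-none-eabi-gcc -c -o hello.o hello.c
  $ arm-none-eabi-objdump -d hello.o
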

In the next installment, we'll talk about Linux applications, libraries, and the wide range of freely available software packages.  

Michael Eager 


DDR4 You Can Use Now

Posted Oct 17, 2013, 10:30 by Работа КА   [ updated Oct 17, 2013, 10:39 ]

After seven years of development, JEDEC released the DDR4 DRAM standard (JESD79-4) last fall. The standards committee recognized the ever-increasing performance demands placed on memory and knew that a simple update wouldn't be enough.
The DDR4 architecture represents a major departure from previous DRAM standards, affording significant performance improvements, dramatic reductions in power demand, and compatibility with 3D architectures. Typically, a couple of years elapse between the release of a standard and broad availability of product.
Given the rapid evolution of the technology, however, DDR4 is expected to mature quite a bit more rapidly than its predecessors, with broad deployment hitting in 2014. Indeed, at the recent Intel Developers Forum (IDF), companies demonstrated working systems, like Kingston Technology's memory demo highlighting 192 GB of working 2133 MT/s DDR4 Registered DIMMs at 1.2V operating on a future Intel reference platform. We thought it was a good time to take a look at some of the offerings out on the market available to design engineers.
The following slideshow reveals that the products currently sampling go beyond memory modules to include controllers and chipsets.

SDRAM controller and PHY (Altera)

A DDR4 SDRAM interface solution provides a flexible method for designers to interface external memory with FPGAs and SoCs. The Altera PHY megafunctions and associated High-Performance Memory Controller II (HPMCII) are two distinct offerings that can be used together or individually. The PHY megafunctions provide the interface between the memory controller and the external memory devices, performing read and write operations to memory. They can be used as part of the HPMCII MegaCore function to create a complete controller-plus-PHY solution for DDR4 SDRAM, or separately with a custom controller.

DDR4 DRAM (Micron)

Micron's 8 Gb DDR4 DRAM operates at data rates as high as 2400 MT/s. By leveraging the power-saving options enabled by the DDR4 standard, the devices deliver a 40 percent reduction in power consumption and a 20 percent reduction in voltage compared to DDR3 DRAM. The components also sport a JTAG boundary-scan feature to enable early fault detection during testing. At the 2013 Consumer Electronics Show, Micron's consumer line Crucial announced availability of DDR4 DRAM modules, although they do not currently appear on the company's website.

PHY IP (Synopsys)

A set of mixed-signal PHY IP cores provides a physical interface compliant to the DDR4 spec, as well as to LPDDR3 and prior editions. The Synopsys DesignWare DDR4 multiPHY IP supports DDR4 SDRAM speeds up to 2400 Mbps. Each DDR4 multiPHY encompasses an application-specific SSTL I/O library, a single address/command macro block, multiple-byte-wide data macro blocks instantiated as required to accommodate the memory channel width, and separate PLL macrocells that directly abut the address/command macro block and data macro blocks. They target 28-nm nodes and below.

Enterprise-class RDIMM (Innodisk)

Innodisk has taken a step toward the server market with the sampling of DDR4 RDIMMs. The family includes 4 GB, 8 GB, and 16 GB devices. Memory bus speeds start at 2133 MHz. With DDR4, the maximum capacity per module has been doubled from 64 GB to 128 GB. The higher memory densities possible with DDR4 will save space, simplify module construction, and improve internal airflow.

LRDIMM chipset (IDT)


This chipset for DDR4 RDIMMs and LRDIMMs combines IDT's 4DB0124 DDR4 data buffer and its 4RCD0124 registered clock driver (RCD) to provide complete buffering of command, address, clock, and data signals across an LRDIMM. Instantiating nine data buffers across the bottom of an LRDIMM with a single RCD in the center allows up to 16 ranks of DRAM to be reduced to a single load, minimizing stub lengths and physical skew between data bits and increasing the speed and bandwidth performance of LRDIMMs in multi-slot systems. The 4DB0124 supports advanced configuration and power interface (ACPI) states, which IDT claims reduces overall system power consumption.

DDR4 registering clock driver (Montage)

A dual-mode DDR4 registering clock driver (RCD) designed for next-generation server platforms can be used independently on an RDIMM or in conjunction with nine data buffers on an LRDIMM. The M88DDR4RCD01 from Montage Technology operates at 1.2 V VDD and offers a variety of power-saving modes, such as S3 low-power mode and CK Stop mode. It features a configurable 32-bit 1:2 registering buffer for address and control signals and I2C interface support.

DDR4 DRAM (Samsung)


Samsung is taking aim at the enterprise server market with the volume release of this family of 4 Gb DDR4 DRAM. Fabricated using 20 nm process technology, the devices deliver 2,667 Mbps operation, 1.25 times the rate of the company's 20-nm-class DDR3, while consuming 30 percent less power. Current packaging includes a 78-ball BGA.

DDR4 register (Inphi)

A DDR4 register compliant with the JEDEC 0.95 spec and targeted at enterprise and data center applications supports up to DDR4-2666 memory. Inphi demonstrated its iDDR4RCD-GS02 at the Intel Developers Forum, running a test system at speeds up to 2400 MT/s while consuming less power than DDR3-1866 modules of the same capacity. The register allows system designers to customize the performance and power profile across a wider range of operating frequencies compared to DDR3.




iMotion 3D controller launches on Kickstarter with dreams of replacing your mouse

Posted Oct 10, 2013, 10:40 by Работа КА   [ updated Oct 10, 2013, 11:01 ]

If you thought the Kinect was a brilliant step forward in 3D sensing and you were enthralled by the possibilities of hand gestures with the Leap Motion, then you might be interested in what the iMotion 3D motion controller has to offer. A small rounded rectangular device that fits onto your hand like a glove, the iMotion is composed of accelerometers, gyroscopes and three LED sensors that will communicate with any standard web cam to locate your body in 3D space. There's no special sauce to it either; as long as you have the iMotion software on your computer, you're able to use the controller with pretty much any application. However, iMotion does plan on releasing an SDK so that developers can fine-tune their app or game to enable additional features of the iMotion, such as better precision and haptic feedback.
The technology was initially developed a few years ago by Intellect Motion, a company based out of Singapore, for medical purposes like sports rehabilitation. A year ago however, it started to delve into the gaming side of things and came up with the prototype device you see above. Now the company is ready to move on to the next stage, and that's to launch the device on Kickstarter and get the iMotion out to the public.
Using the iMotion is fairly intuitive. You can move the cursor by waving your hand around. To left-click, you simply close your fingers to cover the top LED; you can calibrate it so that only a partial covering will suffice. There are also additional buttons on the side of the iMotion that can simulate a right-click or center button. Alex Khromenkov, one of Intellect Motion's co-founders, also demonstrated the iMotion with an open-source first-person shooter called Xonotic. He showed that you can move around the space by positioning your hand closer to or further from the camera.


Khromenkov said that the final version will have adjustable Velcro straps. On the underside of the iMotion are four vibrating pads which are there to provide haptic feedback. As mentioned earlier though, that's only available if developers have incorporated the iMotion SDK into the app or game. They can set it so that the controller vibrates to let you know which direction you're getting shot at in an FPS. Khromenkov tells us that over 100 developers have already signed on for the SDK, so hopefully we'll see even more usage examples of the iMotion.
In order to get the iMotion into consumers' hands, the company has started a $100,000 Kickstarter campaign. The final retail price should be around $79. Anyone who buys a controller will get access to the aforementioned SDK, which will let devs create iMotion-compatible apps for iOS, Windows, or Linux.


In Russian: 3D контроллер iMotion возможно заменит мышь (the iMotion 3D controller may replace the mouse)


Getting started with Embedded Linux - Part Two

Posted Oct 7, 2013, 09:53 by Работа КА   [ updated Oct 7, 2013, 10:05 ]

In the first part of this series, I outlined an approach to getting started with Embedded Linux for people with experience using non-Linux embedded systems. This starts with learning Linux in a desktop environment, running on a VMware or VirtualBox environment. One advantage that Linux has over other embedded OSes is that the same kernel is used on all systems, from smallest to largest, and many of the same utilities and libraries are available for both embedded and desktop environments.
Teaching Linux is far beyond the scope of a short article, but we can outline a road map to becoming acquainted with Linux on the desktop and talk about how this relates to Linux in an embedded environment. There are many good books which will introduce you to Linux. The important basic principles to familiarize yourself with are the command line, file system, directory structure, and process organization.
Most Linux system configuration and management is performed from the command line. On a desktop system, this means opening a terminal window and using the Bash shell. Commands start with the command name and usually accept options (generally preceded by a hyphen) followed by file names. Many command names are terse (like ls or cp) and can take a number of options, most of which you will rarely use. If you are familiar with the Windows CMD shell (or the MS-DOS shell from which it evolved), a number of the commands are similar (like cd), but there frequently are subtle differences. At a minimum, you will need to know how to display files (cat, less, file), list and move between directories (ls, cd, pwd), and how to get help (man, info, apropos). My desktop Linux system has thousands of commands. You don't need to know more than a small number, but if there is something you want to do, it's likely there is a command to do it. The apropos command is a good way to find commands which might do what you want. Try running "man apropos" from the command line. You will also need to become familiar with an editor, such as vi, which can be used in a command shell.
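For example, to find commands related to editing, and then read the manual page for one of them (output omitted):

  $ apropos editor
  $ man vi
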
On an Embedded Linux system, most likely you will not have a windowing system. Instead you will be interacting with BusyBox and the Ash shell, a small command line interpreter. BusyBox packages about 200 commands into a single executable program.
One of the design philosophies of Unix and Linux is its organization around a hierarchical file system. The root of this file system is named "/" and everything in the file system can be found starting here. Naturally, the file system holds regular files containing text or data, as well as programs. Additionally, it contains a variety of special "files" which may represent physical devices (like hard drives), interfaces created by drivers (such as virtual terminals), network connections, and more. Where other OSes may provide a programmatic interface to internal information about processes or memory, Linux provides a much simpler interface by representing this information as text files. The /proc directory, for example, contains a sub-directory for each currently running process which describes almost everything you might want to know about the process.
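For example, you can print the kernel version string and list the /proc entries for the current process (here, the ls command itself):

  $ cat /proc/version
  $ ls /proc/self
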
The common directories are /boot, containing the boot program; /bin and /sbin, containing system programs (those in /sbin are usually run by the system administrator, root); /dev, containing devices (both real and virtual); /etc, containing system configuration files; /home, containing user files; /proc and /sys, with system information; /lib, containing libraries; /usr, containing not user files but programs which may be run by users; /tmp, containing temporary files; and finally /var, containing system logs. An Embedded Linux system will have the same organization, although occasionally some directories may be combined. It will have far fewer files than a desktop system.
Linux (and Unix) has a hierarchical process structure. The first process, init, has process ID (PID) one and is created by the Linux kernel when the system starts. Init, in turn, creates child processes which allow you to log into the system. These in turn start windowing systems or command shells, which in turn spawn other processes. If you type "ps" into a command window, you will see a brief listing of the top-level processes running in that window, usually just the ps command itself and the bash command line interpreter. Typing "ps -l" will give you more information, including the process ID of each process's parent (PPID), the user ID (UID) of the person running the program, and more. The "ps l" command will also print background processes. (A very few older commands inherited from Unix, like ps and tar, optionally omit the hyphen that precedes options. Unfortunately, for historical reasons, ps gives different output depending on whether you specify an option with or without the hyphen.) The "ps alx" command will give you a long list of every process running on your system, far more than you really want to know. (You might want to pipe this through less to page through the listing: "ps alx | less".) You can also look through the /proc directory to see a different view of the processes on your system.
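As a quick sketch, you can also count the processes on your system (the count includes one header line):

  $ ps alx | wc -l
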
An Embedded Linux system has exactly the same process organization as your desktop system; the difference is that far fewer processes run on an embedded system than on a desktop system.
Wander around your Linux system running on the VM. Try listing files and running commands. Don't be afraid that you might break something; Linux is very robust. But you might take a snapshot of the system so that you can get back to a known stable system just in case.
Our next installment will talk about program development for Linux and Embedded Linux.



Michael Eager 


Code and compilers: small, fast or what?

Posted Sep 23, 2013, 12:17 by Работа КА   [ updated Sep 23, 2013, 12:20 ]

All modern compilers generate optimized code. For embedded developers, good optimization is critical, as resources are always at a premium, but control of optimization is also essential, as every embedded system is different. This article considers the balance between fast and small code and why the choice is necessary. It also gives examples where this rule can be broken, producing code that is both fast and small, which leads to a reconsideration of the true function of a compiler.
What is a compiler?
Ask an average engineer and you will get an answer something like: “A software tool that translates high level language code into assembly language or machine code.” Although this definition is not incorrect, it is rather incomplete and out of date – somewhat 1970s. A better way to think of a compiler is: “A software tool that translates an algorithm described in a high level language code into a functionally identical algorithm expressed in assembly language or machine code.” More words, yes, but a more precise definition.
The implications of this definition go beyond placating a pedant like me. It leads to a greater understanding of code generation and hints at just how good a job a modern compiler can do, along with the effect on debugging the compiled code.

Life is often about compromise, but embedded developers are not good at that. Code generation is a context in which compromise is somewhat inevitable - we call it “optimization”. All modern compilers perform optimization, of course. Some do a better job than others. A lot of the time, the compiler simply guesses which optimization will produce the best result without knowing what the designer really wants. For desktop applications, this is OK. Speed is the only important criterion, as memory is effectively free. But embedded is different.
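GCC, for example, exposes the compromise directly as command-line flags (a sketch with a hypothetical source file):

  $ gcc -O2 -c module.c    # optimize primarily for speed
  $ gcc -Os -c module.c    # optimize for size
  $ size module.o          # compare the resulting code sizes
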
Desktop versus embedded
To a first approximation, all desktop computers are the same. It is quite straightforward to write acceptable applications that will run on anyone's machine. Also, the broad expectations of their users are identical. But embedded systems are all different; the hardware and software environment varies widely and the expectations of users are just as diverse. In many ways, this is what is particularly interesting about embedded software development.



A toolbox for embedded software engineers

Posted Sep 19, 2013, 11:22 by Работа КА   [ updated Sep 19, 2013, 11:28 ]

It's fascinating that we have so many software tools available at our fingertips. As an engineer I have always tried to make full use of these tools to better organize my life and perform my tasks. Touchstone has been very open to new ideas and ways to do things better, which has encouraged me to keep refining my software toolkit. So I would like to share a few of the software tools from my toolkit here.

Being a circuits enthusiast, I have always wanted to have and use a personal SPICE program right on my personal laptop. Ngspice is my choice for this. It is a very actively developed and maintained open-source SPICE simulator. It has recently added PSS analysis, which is normally found in the commercial Spectre simulator. It has full scripting support as well as support for Verilog and user-defined models. It is also available as a dynamic library, so you can build your own program around it. Ngspice's website has many resources and links to programs that can be used to enter schematics and to look at simulation results. Ngspice is platform independent, so whether you are a Linux fan or use Windows, you can use it easily.

For my board-level projects I have often used KiCad. This is another free, open-source package which comes with its own schematic editor, component editor, and PCB designer. I really like the PCB design tool for its usability enhancements, like net names on the layout tracks! It also has 3D viewing of the populated PCB, which can be exported to 3D CAD software like SketchUp to design the enclosure for the board. Recently KiCad added Python scripting, so you can write macros in Python and run them. KiCad is a cross-platform tool and works easily on Windows, Linux, and Mac.

Toped is an open-source layout editor I have used to view and edit IC layouts. It supports the GDS, OASIS, and CIF formats. It comes with its own scripting language, which I have not used. This software comes in very handy when I want to look at a GDS file on my personal laptop, or make edits on a layout for better documentation and communication.

Maxima is an open-source computer algebra system which I have used on numerous occasions to solve the algebraic equations of the circuits I design. It solves in a snap equations that would otherwise take me hours and be very error prone. It allows you to do symbolic as well as numerical math, and you can write full scripts in the Lisp language to work through a complex problem, which I have done before. Again, Maxima is platform independent and has a large and active community of users and developers.

Very often you need a math package to process numeric data, for example taking transforms, plotting, etc. For this I have used Matlab, Scilab, and Octave. Scilab is a good, extensive, free, open-source tool; Octave is very handy since it uses Matlab syntax, so most Matlab scripts need little or no modification. The nice thing about Scilab and Octave is that they are both free and open source. Currently, though, I am using GSL Shell for my math processing needs. It is a relatively new tool and does not have a large community, but I like it because it is well designed and is based on the Lua scripting language, which makes it versatile and extremely fast (almost as fast as native C code). I have used GSL Shell to process circuit simulation data and found it quite easy to pick up.

The Lua language itself is also in my software toolkit. Lua is extremely fast in execution (nearing the speed of compiled C when you use its JIT engine), yet is extremely easy to learn and use. In spite of its simplicity, it's possible to do highly advanced programming in Lua that would make your head spin to do in C/C++. Lua runs on Android, iPhone, Windows, Linux, Mac, etc. It is used at places like NASA, Google, and Adobe, and is widely used in the video game industry because of its speed.

All engineers need to record and later recall notes and data. Most engineers prefer a real notebook and a pen, but for me it's TiddlyWiki. TiddlyWiki is an excellent self-contained wiki, ready to use out of the box. It is an HTML file that opens in a browser and is edited right in there, so it is totally platform independent and works on desktops and even your smartphone. The advantage of using this over pen and paper is that you can link and cross-link things very easily, so your notes become a web of connected information, much like how it is stored in your brain. It also becomes easy to back up and carry with you wherever you go. TiddlyWiki has a big community and hundreds of plugins to do almost anything you can imagine you might want to do with such software.

It is often necessary to make diagrams and flowcharts when writing documentation. Dia is an excellent open-source tool for that. Its installation packages are available for Linux, Windows, and Mac. It also allows scripting and automation using Python, for those of you who know the Python programming language. I use Dia to create block diagrams, etc.

Inkscape is a useful open-source tool to create drawings and artwork for presentations or websites. It creates files as Scalable Vector Graphics (SVG), so they can be resized without loss of resolution. If you are good with Microsoft Paint, then you will quickly be able to create professional-quality images for your presentations with Inkscape.

Engineers are exposed to PDF documents nearly every day. PDFfill is not free or open source, but it is very good and not too expensive at $20. It is good for editing text, annotations, and drawings in a PDF.

Jarnal is a handwriting note application that I find very useful when I want to take notes right on a PDF instead of having to maintain the PDF and notes separately. With my tablet digitizer pen I can take notes directly on top of the PDF. Jarnal is Java software, so it's cross-platform, open source, and free. Jarnal also comes in handy if I want to quickly create notes with hand-drawn schematics.

These days most of us have multiple phones, tablets, laptops, etc., and keeping the data synchronized between them all can be a nightmare. FreeFileSync is an excellent folder-synchronization program. It is really fast at comparing hundreds of thousands of files and syncing them. You can create sync profiles and just execute them to sync two connected devices. It is open source and cross-platform.

So these are some of the software tools I use to make my life easier and more efficient. I would love to hear about other software tools that you think are great.

To read more blogs by Milind Gupta, go to Part 1 and Part 2. 

posted by Milind Gupta


AMD Plans 64-bit ARM for Communications in 2014

Posted Sep 11, 2013, 07:51 by Работа КА   [ updated Sep 11, 2013, 07:59 ]

SAN FRANCISCO — Advanced Micro Devices sketched out its 2014 road map for embedded processors, the first to include an ARM-based SoC.

AMD's Hierofalcon will pack four to eight 64-bit ARM Cortex-A57 CPU cores along with 10GBase-KR Ethernet and PCI Express Gen 3 links, targeting communications and storage systems. The 28 nm device will go up against Intel's 22 nm Rangeley, an Atom-based SoC already shipping and likely to be upgraded by the time the AMD part is ready.

Hierofalcon will come in versions spanning 15 to 30 W and support ARM's TrustZone security implemented on a Cortex-A5. AMD would not say whether it will implement the Freedom Fabric, acquired with startup SeaMicro, that will be used in "Seattle," AMD's ARM-based server SoC shipping next year.

AMD also announced two x86-based embedded processors and its next-generation graphics core, all coming in 2014 and made in a 28 nm process. The graphics core, called Adelaar, will embed 2 Gbytes of GDDR5 memory, deliver 76 Gbytes/s of memory throughput, and ship early next year.

A G-series embedded part called Steppe Eagle will use two to four enhanced Jaguar x86 cores and a Radeon 8000 GPU, and come in versions dissipating as little as 5 W. A high-end R-series part called Bald Eagle will be AMD's first embedded part using the GPU/CPU coherent memory approach defined by the Heterogeneous System Architecture Foundation. It will use two to four enhanced Steamroller x86 cores and a Radeon 9000 GPU.

The Steppe Eagle part will fit into sockets of existing G-Series chips launched in April, offering better performance per watt, making it "easier for customers to absorb," said Nathan Brookwood, principal of Insight64 of Saratoga, Calif. "64-bit ARM in embedded is a big deal, but I am disappointed they are not providing details on the Freedom Fabric," he said.

Posted by Rick Merritt

AMD's 2014 embedded road map debuts ARM with Hierofalcon.

Learning Linux for embedded systems

Posted Sep 7, 2013, 11:00 by Работа КА   [ updated Sep 7, 2013, 11:04 ]

I was recently asked how a person with experience in embedded systems programming with 8-bit processors, such as PIC, as well as 32-bit processors, such as PowerPC, but no Linux experience, can learn how to use Embedded Linux.
What I always recommend to such an embedded systems programmer is this: Look at Embedded Linux as two parts, the embedded part and the Linux part. Let's consider the Linux part first.
The Linux side
Operating systems abound and the choices are many for an embedded system, both proprietary and open source. Linux is one of these choices. No matter what you use for your development host, whether Linux, Windows, or Mac, you need to learn how to program using the target OS. In this respect, using Embedded Linux is not greatly different from using VxWorks, Windows CE, or another OS. You need an understanding of how the OS is designed, how to configure it, and how to program using its application programming interface (API).
A few factors make learning how to program Linux easier than other embedded OSes. You'll find many books and tutorials about Linux, as well as Unix from which it is derived -- many more than for other OSes. Online resources for Linux are ample, while other OSes have a much smaller presence, or one driven by the OS manufacturer. Linux is open source, and you can read the code to get an understanding of exactly what the OS is doing, something that is often impossible with a proprietary OS distributed as binaries. (I certainly do not recommend reading Linux source to try to learn how to program Linux. That's like trying to learn to drive by studying how a car's transmission works.)
The most significant factor that sets Linux apart from other OSes is that the same kernel is used for all systems, from the smallest embedded boards, to desktop systems, to large server farms. This means that you can learn a great deal of Linux programming on your desktop, in an environment which is much more flexible than a target board with all of the complexities of connecting to the target, downloading a test program, and running the test. All of the basic concepts and most APIs are the same for your desktop Linux and your Embedded Linux.
Installing Linux
You could install a desktop Linux distribution on your development system, replacing your Windows or Mac system, but that may be a pretty large piece to bite off at one time, since you would likely need to configure email, learn new tools, and come up to speed with a different desktop interface. You could install Linux in a dual-boot environment, where you use the old environment for email, etc., and use the Linux system for learning. This can be pretty awkward, since you need to shut down one environment to bring up the other. Additionally, doing either within a corporate environment may be impractical or impossible. IT folks prefer supporting a known environment, not one that you have chosen.
An easier way is to create a virtual machine environment on your current development system. For Windows hosts, you can install VMware Player or VirtualBox, and on the Mac, you can install Parallels or VMware Fusion. Using a VM offers you much more flexibility. You can install a desktop Linux distribution, like Ubuntu or Fedora. You can use this distribution to become familiar with basic Linux concepts, learn the command shell and learn how to build and run programs. You can reconfigure the kernel or load drivers, without the concern that you'll crash your desktop system. You can build the entire kernel and application environment, similar to what you might do with a cross-development environment for an Embedded Linux target.
If your VM running Linux crashes, you simply restart the VM. The crash doesn't affect other things which you might be doing on your development system, such as reading a web page on how to build and install a driver, or writing an email to one of the many support mailing lists.
Some of the VM products have snapshot features that allow you to take a checkpoint of a known working configuration, to which you can roll back if you can't correct a crash easily. This snapshot is far easier than trying to rescue a crashing desktop system or an unresponsive target board.
A Linux VM running on your desktop is not a perfect model for an Embedded Linux environment. The VM emulates the hardware of a desktop system, with a limited set of devices that are unlikely to match a real embedded target. But our objective at this point is not modeling a real target (something we'll discuss later) but creating an environment where you can learn Linux concepts and programming easily.
This is the first step: Create a VM and install a desktop Linux distribution on the VM. We'll pick up from here in our next installment.


