Documentation
Software Tools
Hardware Tools
Software Implementations
Hardware Implementations
riscv-tools 
Three guides are available for this repo:
Quickstart
$ git submodule update --init --recursive
$ export RISCV=/path/to/install/riscv/toolchain
$ ./build.sh
Ubuntu packages needed:
$ sudo apt-get install autoconf automake autotools-dev curl libmpc-dev libmpfr-dev libgmp-dev gawk build-essential bison flex texinfo gperf
Note: This requires GCC >= 4.8 for C++11 support (including thread_local). To use a compiler different from the default (for example on OS X), use:
$ CC=gcc-4.8 CXX=g++-4.8 ./build.sh
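If you aren't sure whether your default compiler qualifies, you can check its version before deciding whether the override above is needed (GCC 4.8 or newer suffices):
$ gcc --version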
The RISC-V GCC/Newlib Toolchain Installation Manual
This document was authored by Quan Nguyen and is a mirrored version (with slight modifications) of the one found at Quan's OCF website. Recent updates were made by Sagar Karandikar.
Last updated August 6, 2014
Introduction
The purpose of this page is to document a procedure through which an interested user can build the RISC-V GCC/Newlib toolchain.
A project with a duration such as this requires adequate documentation to support future development and maintenance. This document is created with the hope of being useful; however, its accuracy is not guaranteed.
This work was completed at Andrew and Yunsup's request.
Table of Contents
- Introduction
- Table of Contents
- Meta-installation Notes
- Installing the Toolchain
- Testing Your Toolchain
- "Help! It doesn't work!"
Meta-installation Notes
You may notice this document strikes you as similar to its bigger sibling, the Linux/RISC-V Installation Manual. That's because the instructions are rather similar. That said...
Running Shell Commands
Instructive text will appear as this paragraph does. Any instruction to execute in your terminal will look like this:
$ echo "execute this"
Optional shell commands that may be required for your particular system will have their prompt preceded with an O:
O$ echo "call this, maybe"
If you will need to replace a bit of code that applies specifically to your situation, it will be surrounded by [square brackets].
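For example, given a hypothetical instruction such as:
$ export RISCV=[/path/to/riscv/install]
you would type it with your own path substituted:
$ export RISCV=/scratch/quannguyen/riscv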
The Standard Build Unit
To indicate how long it will take to build the various components of the packages on this page, I have provided build times in terms of the Standard Build Unit (SBU), as coined by Gerard Beekmans in his immensely useful Linux From Scratch website.
On an Intel Xeon Dual Quad-core server with 48 GiB RAM, I achieved the following build time for binutils: 38.64 seconds. Thus, 38.64 seconds = 1 SBU. (EECS members at the University of California, Berkeley: I used the s141.millennium server.)
As a point of reference, my 2007 MacBook with an Intel Core 2 Duo and 1 GiB RAM takes 100.1 seconds per SBU. Building riscv-linux-gcc, unsurprisingly, took about an hour.
Items marked as "optional" are not measured.
Having Superuser Permissions
You will need root privileges to install the tools to directories like /usr/bin, but you may optionally specify a different installation directory. In that case, superuser privileges are not necessary.
GCC Version
Note: Building riscv-tools requires GCC >= 4.8 for C++11 support (including thread_local). To use a compiler different from the default (for example on OS X), you'll need to do the following when the guide requires you to run build.sh:
$ CC=gcc-4.8 CXX=g++-4.8 ./build.sh
Installing the Toolchain
Let's start with the directory in which we will install our tools. Find a nice, big expanse of hard drive space, and let's call that $TOP. Change to the directory you want to install in, and then set the $TOP environment variable accordingly:
$ export TOP=$(pwd)
For the sake of example, my $TOP directory is on s141.millennium, at /scratch/quannguyen/noob, named so because I believe even a newbie at the command prompt should be able to complete this tutorial. Here's to you, n00bs!
Tour of the Sources
If we are starting from a relatively fresh install of GNU/Linux, it will be necessary to install the RISC-V toolchain. The toolchain consists of the following components:
- riscv-gcc, a RISC-V cross-compiler
- riscv-fesvr, a "front-end" server that services calls between the host and target processors on the Host-Target InterFace (HTIF) (it also provides a virtualized console and disk device)
- riscv-isa-sim, the ISA simulator and "golden standard" of execution
- riscv-opcodes, the enumeration of all RISC-V opcodes executable by the simulator
- riscv-pk, a proxy kernel that services system calls generated by code built and linked with the RISC-V Newlib port (this does not apply to Linux, as it handles the system calls)
- riscv-tests, a set of assembly tests and benchmarks
In the installation guide for Linux builds, we built only the simulator and the front-end server. Binaries built against Newlib with riscv-gcc will not have the luxury of being run on a full-blown operating system, but they will still demand access to some crucial system calls.
What's Newlib?
Newlib is a "C library intended for use on embedded systems." It has the advantage of not having so much cruft as Glibc at the obvious cost of incomplete support (and idiosyncratic behavior) in the fringes. The porting process is much less complex than that of Glibc because you only have to fill in a few stubs of glue code.
These stubs of code include the system calls that are supposed to call into the operating system you're running on. Because there's no operating system proper, the simulator runs, on top of it, a proxy kernel (riscv-pk) to handle many system calls, like open, close, and printf.
Obtaining and Compiling the Sources (7.87 SBU)
First, clone the tools from the riscv-tools GitHub repository:
$ git clone https://github.com/ucb-bar/riscv-tools.git
This command will bring in only references to the repositories that we will need. We rely on Git's submodule system to take care of resolving the references. Enter the newly-created riscv-tools directory and instruct Git to update its submodules.
$ cd $TOP/riscv-tools
$ git submodule update --init --recursive
To build GCC, we will need several other packages, including flex, bison, autotools, libmpc, libmpfr, and libgmp. On Ubuntu, install them with the following command if you have not already:
O$ sudo apt-get install autoconf automake autotools-dev curl libmpc-dev libmpfr-dev libgmp-dev gawk build-essential bison flex texinfo gperf
Before we start installation, we need to set the $RISCV environment variable. The variable is used throughout the build script process to identify where to install the new tools. (This value is used as the argument to the --prefix configuration switch.)
$ export RISCV=$TOP/riscv
If your $PATH variable does not contain the directory specified by $RISCV, add it to the $PATH environment variable now:
$ export PATH=$PATH:$RISCV/bin
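If you'd like these settings to persist across shell sessions, you can append them to your shell startup file (a sketch, assuming a bash-style shell; the double quotes expand $TOP now, while the single quotes defer $RISCV to login time):
O$ echo "export RISCV=$TOP/riscv" >> ~/.bashrc
O$ echo 'export PATH=$PATH:$RISCV/bin' >> ~/.bashrc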
One more thing: if your machine doesn't have the capacity to handle 16 make jobs (or conversely, it can handle more), edit build.common to change the number specified by JOBS.
O$ sed -i 's/JOBS=16/JOBS=[number]/' build.common
With everything else set up, run the build script. Recall that if you're using a newer GCC that isn't the default on your system, you'll need to precede ./build.sh with CC=gcc-4.8 CXX=g++-4.8:
$ ./build.sh
Testing Your Toolchain
Now that you have a toolchain, it'd be a good idea to test it on the quintessential "Hello world!" program. Exit the riscv-tools directory and write your "Hello world!" program. I'll use a long-winded echo command.
$ cd $TOP
$ echo -e '#include <stdio.h>\n int main(void) { printf("Hello world!\\n"); return 0; }' > hello.c
Then, build your program with riscv-gcc.
$ riscv-gcc -o hello hello.c
When you're done, you may think to run ./hello, but not so fast. We can't even run spike hello, because our "Hello world!" program involves system calls that cannot be handled by our host x86 system. We'll have to run the program within the proxy kernel, which is itself run by spike, the RISC-V architectural simulator. Run this command to run your "Hello world!" program:
$ spike pk hello
The RISC-V architectural simulator, spike, takes as its argument the path of the binary to run. This binary is pk, located at $RISCV/riscv-elf/bin/pk; spike finds it automatically. riscv-pk then receives as its argument the name of the program you want to run.
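In other words, the command above is shorthand for something like the following (a sketch, assuming the default install layout just described):
$ spike $RISCV/riscv-elf/bin/pk hello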
Hopefully, if all's gone well, you'll have your program saying, "Hello world!". If not...
"Help! It doesn't work!"
I know, I've been there too. Good luck!
The Linux/RISC-V Installation Manual
Introduction
The purpose of this page is to document a procedure through which an interested user can install an executable image of the RISC-V architectural port of the Linux kernel.
A project with a duration such as this requires adequate documentation to support future development and maintenance. This document is created with the hope of being useful; however, its accuracy is not guaranteed.
This document is a mirrored version (with slight modifications) of the one found at Quan's OCF website.
Table of Contents
- Introduction
- Table of Contents
- Meta-installation Notes
- Installing the Toolchain
- Building the Linux Kernel
- Building BusyBox
- Creating a Root Disk Image
- "Help! It doesn't work!"
- Optional Commands
- Appendices
- Building the Native Compiler
- References
Meta-installation Notes
Running Shell Commands
Instructive text will appear as this paragraph does. Any instruction to execute in your terminal will look like this:
$ echo "execute this"
Optional shell commands that may be required for your particular system will have their prompt preceded with an O:
O$ echo "call this, maybe"
When booted into the Linux/RISC-V kernel, any command to be run will appear with a root prompt (# as the prompt):
# echo "run this in linux"
If you will need to replace a bit of code that applies specifically to your situation, it will be surrounded by [square brackets].
The Standard Build Unit
To indicate how long it will take to build the various components of the packages on this page, I have provided build times in terms of the Standard Build Unit (SBU), as coined by Gerard Beekmans in his immensely useful Linux From Scratch website.
On an Intel Xeon Dual Quad-core server with 48 GiB RAM, I achieved the following build time for binutils: 38.64 seconds. Thus, 38.64 seconds = 1 SBU. (EECS members at the University of California, Berkeley: I used the s141.millennium server.)
As a point of reference, my 2007 MacBook with an Intel Core 2 Duo and 1 GiB RAM takes 100.1 seconds per SBU. Building riscv-linux-gcc, unsurprisingly, took about an hour.
Items marked as "optional" are not measured.
Having Superuser Permissions
You will need root privileges to install the tools to directories like /usr/bin, but you may optionally specify a different installation directory. In that case, superuser privileges are not necessary.
Installing the Toolchain (11.81 + ε SBU)
Let's start with the directory in which we will install our tools. Find a nice, big expanse of hard drive space, and let's call that $TOP. Change to the directory you want to install in, and then set the $TOP environment variable accordingly:
$ export TOP=$(pwd)
For the sake of example, my $TOP directory is on s141.millennium, at /scratch/quannguyen/noob, named so because I believe even a newbie at the command prompt should be able to boot Linux using this tutorial. Here's to you, n00bs!
Installing the RISC-V simulator (0.40 SBU)
If we are starting from a relatively fresh install of GNU/Linux, it will be necessary to install the RISC-V toolchain. The toolchain consists of the following components:
- riscv-gcc, a RISC-V cross-compiler
- riscv-fesvr, a "front-end" server that services calls between the host and target processors on the Host-Target InterFace (HTIF) (it also provides a virtualized console and disk device)
- riscv-isa-sim, the ISA simulator and "golden standard" of execution
- riscv-opcodes, the enumeration of all RISC-V opcodes executable by the simulator
- riscv-pk, a proxy kernel that services system calls generated by code built and linked with the RISC-V Newlib port (this does not apply to Linux, as it handles the system calls)
- riscv-tests, a set of assembly tests and benchmarks
In actuality, of this list, we will need to build only riscv-fesvr and riscv-isa-sim. These are the two components needed to simulate RISC-V binaries on the host machine. We will also need to build riscv-linux-gcc, but this involves a little modification of the build procedure for riscv-gcc.
First, clone the tools from the ucb-bar GitHub repository:
$ git clone https://github.com/ucb-bar/riscv-tools.git
This command will bring in only references to the repositories that we will need. We rely on Git's submodule system to take care of resolving the references. Enter the newly-created riscv-tools directory and instruct Git to update its submodules.
$ cd $TOP/riscv-tools
$ git submodule update --init
To build GCC, we will need several other packages, including flex, bison, autotools, libmpc, libmpfr, and libgmp. On Ubuntu, install them with the following command if you have not already:
O$ sudo apt-get install autoconf automake autotools-dev curl libmpc-dev libmpfr-dev libgmp-dev gawk build-essential bison flex texinfo gperf
Before we start installation, we need to set the $RISCV environment variable. The variable is used throughout the build script process to identify where to install the new tools. (This value is used as the argument to the --prefix configuration switch.)
$ export RISCV=$TOP/riscv
If your $PATH variable does not contain the directory specified by $RISCV, add it to the $PATH environment variable now:
$ export PATH=$PATH:$RISCV/bin
One more thing: if your machine doesn't have the capacity to handle 16 make jobs (or conversely, it can handle more), edit build.common to change the number specified by JOBS.
O$ sed -i 's/JOBS=16/JOBS=[number]/' build.common
Since we only need to build a few tools, we will use a modified build script, listed in its entirety below. Remember that we'll build riscv-linux-gcc shortly afterwards. If you want to build the full toolchain for later use, see here.
[basic-build.sh contents]
#!/bin/bash
. build.common
build_project riscv-fesvr --prefix=$RISCV
build_project riscv-isa-sim --prefix=$RISCV --with-fesvr=$RISCV
Download this script using this command:
$ curl -L http://riscv.org/install-guides/linux-build.sh > basic-build.sh
(The -L option allows curl to handle redirects.) Make the script executable, and with everything else taken care of, run the build script.
$ chmod +x basic-build.sh
$ ./basic-build.sh
Building riscv-linux-gcc (11.41 SBU)
riscv-linux-gcc is the name of the cross-compiler used to build binaries linked to the GNU C Library (glibc) instead of the Newlib library. You can build Linux with riscv-gcc, but you will need riscv-linux-gcc to cross-compile applications, so we will build that instead.
The SYSROOT Concept
When installing these toolchains, the make system often generates a wide variety of libraries and other files. In particular, building Glibc involves building the run-time dynamic linker and the C standard library (ld.so.1 and libc.so.6, in this case). These, together with header files like stdio.h, comprise the system root, an often-necessary set of files for a fully operational system.
When we built riscv-tools, there was no need to specify where to install these files, because we assumed we would always be running on the host machine through a simulator; all of the libraries are on the host system. Now that we're running our binaries from within an operating system, we will have to provide these libraries and headers if we want to run dynamically-linked binaries and compile programs natively.
We now must instruct the riscv-linux-gcc build process to place our system root files in a place where we can get to them. We call this directory SYSROOT. Let's set our $SYSROOT environment variable for easy access throughout the build process. Ensure that this directory is not inside the directory specified by $RISCV.
$ cd $TOP
$ mkdir sysroot
$ export SYSROOT=$TOP/sysroot
The Linux Headers
In an apparent case of circular dependence (but not really), we have to give riscv-linux-gcc the location of the Linux headers. The Linux headers provide the details that Glibc needs to function properly (a prominent example is include/asm/bitsperlong.h, which sets Glibc to be 64-bit or 32-bit). There's a copy of the headers in the riscv-gcc repository, so make a usr/ directory in $SYSROOT and copy the contents of riscv-gcc/linux-headers into the newly created directory.
$ mkdir $SYSROOT/usr
$ cp -r $TOP/riscv-tools/riscv-gcc/linux-headers/* $SYSROOT/usr
(In the event that the kernel headers (anything inside arch/riscv/include/asm/ or include/) are changed, you can use the Linux kernel Makefile to generate a new set of Linux headers - see here.)
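That regeneration mirrors the commands given in the "Installing a Fresh Copy of the Linux Headers" section later in this document; in brief:
O$ cd $TOP/linux-3.14.19
O$ make ARCH=riscv headers_check
O$ make ARCH=riscv headers_install INSTALL_HDR_PATH=$SYSROOT/usr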
Enter the riscv-gcc directory within the riscv-tools repository, and patch up Makefile.in with this patch: sysroot-Makefile.in.patch. This patch adjusts the build system to accept the --with-sysroot configuration flag for the relevant make targets. (Credit to a_ou for the file from which I made this patch.) Use these lines to patch it up:
$ cd $TOP/riscv-tools/riscv-gcc
$ curl -L http://riscv.org/install-guides/sysroot-Makefile.in.patch | patch -p1
When that's done, run the configure script to generate the Makefile.
$ ./configure
These instructions will place your riscv-linux-gcc tools in the same installation directory as the riscv-gcc tool installed earlier. This arrangement is the simplest, but if you would like to place them in a different directory, see here.
Run this command to start the build process:
$ make linux INSTALL_DIR=$RISCV SYSROOT=$SYSROOT
Take note that we supply both the variables INSTALL_DIR and SYSROOT. Even though we are diverting some files to the $SYSROOT directory, we still have to place the cross-compiler somewhere. That's where INSTALL_DIR comes into play.
When we originally built riscv-gcc, we built it using the "newlib" makefile target (in riscv-tools/riscv-gcc/Makefile). This "linux" target builds riscv-linux-gcc with glibc (and the Linux kernel headers). Because we now have to build glibc, it will take much more time. If you don't have the power of a 16-core machine with you, maybe it's time to get a cup of coffee.
Building the Linux Kernel (0.40 + ε SBU)
Obtaining and Patching the Kernel Sources
We are finally poised to bring in the Linux kernel sources. Change out of the riscv-tools/riscv-gcc directory and clone the riscv-linux Git repository into the directory linux-3.14.xx, where xx represents the current minor revision (which, as of early September 2014, is "19").
$ cd $TOP
$ git clone git@github.com:ucb-bar/riscv-linux.git linux-3.14.19
Download the current minor revision of the 3.14 Linux kernel series from The Linux Kernel Archives, and in one fell swoop, untar it over our repository. (The -k switch ensures that our .gitignore and README files don't get clobbered.)
$ curl -L ftp://ftp.kernel.org/pub/linux/kernel/v3.x/linux-3.14.19.tar.xz | tar -xJk
Configuring the Linux Kernel
The Linux kernel is seemingly infinitely configurable. With the current development status, however, there aren't that many devices or options to tweak, so start with a default configuration that should work out of the box with the ISA simulator.
$ make ARCH=riscv defconfig
If you want to edit the configuration, you can use a text-based GUI (ncurses) to edit the configuration:
O$ make ARCH=riscv menuconfig
Among other things, we have enabled by default procfs, ext2, and the HTIF virtualized devices (a block driver and console). In development, it can be very useful to enable "early printk", which will print messages to the console if the kernel crashes very early. You can access this option at "Early printk" in the "Kernel hacking" submenu.
Linux kernel menuconfig interface.
Begin building the kernel once you're satisfied with your configuration. Note the pattern: to build the RISC-V kernel, you must specify ARCH=riscv in each invocation of make. This line is no exception. If you want to speed up the process, you can pass the -j [number] option to make.
$ make -j ARCH=riscv
Congratulations! You've just cross-compiled the Linux kernel for RISC-V! However, there are a few more things to take care of before we boot it.
Building BusyBox (0.26 SBU)
We currently develop with BusyBox, an unbelievably useful set of utilities that all compile into one multi-use binary. We use BusyBox without source code modifications. You can obtain the source at http://www.busybox.net. In our case, we will use BusyBox 1.21.1, but other versions should work fine.
Currently, we need it for its init and ash applets, but with bash cross-compiled for RISC-V, there is no longer a need for ash.
First, obtain and untar the source:
$ cd $TOP
$ curl -L http://busybox.net/downloads/busybox-1.21.1.tar.bz2 | tar -xj
Then, enter the directory and turn off every configuration option:
$ cd busybox-1.21.1
$ make allnoconfig
We will need to change the cross-compiler and set the build to "static" (if desired, you can make it dynamic, but you'll have to copy some libraries later). We will also enable the init, ash, and mount applets. Also, disable job control for ash when the drop-down menu for ash's suboptions appears.
Here are the configurations you will have to change (if you prefer to script these changes, see the sed sketch after the list):
- CONFIG_STATIC=y, listed as "Build BusyBox as a static binary (no shared libs)" in BusyBox Settings → Build Options
- CONFIG_CROSS_COMPILER_PREFIX=riscv-linux-, listed as "Cross Compiler prefix" in BusyBox Settings → Build Options
- CONFIG_FEATURE_INSTALLER=y, listed as "Support --install [-s] to install applet links at runtime" in BusyBox Settings → General Configuration
- CONFIG_INIT=y, listed as "init" in Init utilities
- CONFIG_ASH=y, listed as "ash" in Shells
- CONFIG_ASH_JOB_CONTROL=n, listed as "Ash → Job control" in Shells
- CONFIG_MOUNT=y, listed as "mount" in Linux System Utilities
My configuration file used to create this example is located here: busybox-riscv.config. You can also download it directly using this snippet of code:
$ curl -L http://riscv.org/install-guides/busybox-riscv.config > .config
Whether or not you want to use the file provided, enter the configuration interface much in the same way as that of the Linux kernel:
O$ make menuconfig
BusyBox menuconfig interface. Looks familiar, eh?
Once you've finished, make BusyBox. You don't need to specify ARCH, because we've passed in the name of the cross-compiler prefix.
$ make -j
Once that completes, you now have a BusyBox binary cross-compiled to run on RISC-V. Now we'll need a way for the kernel to access the binary, and we'll use a root disk image for that. Before we proceed, change back into the directory with the Linux sources.
$ cd $TOP/linux-3.14.19
Creating a Root Disk Image
When we initially developed the kernel, we used an initramfs to store our binaries (BusyBox in particular). However, with our HTIF-enabled block device, we can boot off of a root file system proper. (In fact, we still make use of the initramfs, but only to set up devices and the symlink to init. See arch/riscv/initramfs.txt.)
Currently, we have a root file system pre-packaged specifically for the RISC-V release. You can obtain it by heading to the index of my website, http://ocf.berkeley.edu/~qmn, finding my email, and contacting me.
To create your own root image, we need to create an ext2 disk image. To create an empty disk image, use dd, setting the count argument to the size, in MiB, of your disk image. 64 MiB seems to be good enough for our purposes.
$ dd if=/dev/zero of=root.bin bs=1M count=64
The file root.bin is just an empty chunk of zeros and has no partitioning information. To format it as an ext2 disk, run mkfs.ext2 on it:
$ mkfs.ext2 -F root.bin
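As an optional sanity check, the file utility should now identify the image as an ext2 filesystem (exact wording varies by version):
O$ file root.bin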
You can modify this filesystem if you mount it as writable from within Linux/RISC-V. However, a better option, especially if you want to copy big binaries, is to mount it on your host machine. You will normally need superuser privileges to do a mount. Do so this way, assuming you want to mount the disk image at linux-3.14.19/mnt:
$ mkdir mnt
$ sudo mount -o loop root.bin mnt
(Instructions for mounting provided courtesy of a_ou.)
If you cannot mount as root, you can use Filesystem in Userspace (FUSE) instead. See here.
Once you've mounted the disk image, you can edit the files inside. There are a few directories that you should have:
/bin
/dev
/etc
/lib
/proc
/sbin
/tmp
/usr
So create them:
$ cd mnt
$ mkdir -p bin etc dev lib proc sbin tmp usr usr/bin usr/lib usr/sbin
Then, place the BusyBox executable we just compiled in /bin.
$ cp $TOP/busybox-1.21.1/busybox bin
If you have built BusyBox statically, that will be all that's needed. If you want to build BusyBox dynamically, you will need to follow a slightly different procedure, described here.
We will also need to prepare an initialization table in the aptly-named file inittab, placed in /etc. Here is the inittab from our disk image:
1 ::sysinit:/bin/busybox mount -t proc proc /proc
2 ::sysinit:/bin/busybox mount -t tmpfs tmpfs /tmp
3 ::sysinit:/bin/busybox mount -o remount,rw /dev/htifbd0 /
4 ::sysinit:/bin/busybox --install -s
5 /dev/console::sysinit:-/bin/ash
Line 1 mounts the procfs filesystem onto /proc. Line 2 does similarly for tmpfs. Line 3 mounts the HTIF-virtualized block device (htifbd) onto root. Line 4 installs the various BusyBox applet symbolic links in /bin and elsewhere to make it more convenient to run them. Finally, line 5 opens up an ash shell on the HTIF-virtualized TTY (console, mapped to ttyHTIF) for a connection.
Download a copy of the example inittab using this command:
$ curl -L http://riscv.org/install-guides/linux-inittab > etc/inittab
If you would like to use getty instead, change line 5 to invoke that:
5 ::respawn:/bin/busybox getty 38400 ttyHTIF0
Once you've booted Linux, the symlinks created by line 4 will persist between boots of the kernel. Recreating them at every boot causes a bunch of unsightly errors, so comment out line 4 before your next boot.
Also, we will need to create a symbolic link to /bin/busybox for init to work.
$ ln -s ../bin/busybox sbin/init
Add your final touches and binaries to your root disk image, and then unmount the disk image.
$ cd ..
$ sudo umount mnt
Now, we're ready to boot a most basic kernel, with a shell. Invoke spike, the RISC-V architectural simulator, named after the golden spike that joined the two tracks of the Transcontinental Railroad and considered to be the golden model of execution. We will need to load in the root disk image through the +disk argument to spike as well. The command looks like this:
$ spike +disk=root.bin vmlinux
vmlinux is the name of the compiled Linux kernel binary.
If there are no problems, an ash prompt will appear after the boot process completes. It will be pretty useless without the usual plethora of command-line utilities, but you can add them as BusyBox applets. Have fun!
To exit the simulator, hit Ctrl-C.
Linux boot and "Hello world!"
If you want to reuse your disk image in a subsequent boot of the kernel, remember to remove (or comment out) the line that creates the symbolic links to BusyBox applets. Otherwise, it will generate several (harmless) warnings in each subsequent boot.
"Help! It doesn't work!"
I know, I've been there too. Good luck!
Optional Commands
Depending on your system, you may have to execute a few more shell commands or execute them differently. It's not too useful if you've arrived here after reading the main text of the document; it's best that you're referred here instead.
Building the Full Toolchain (7.62 SBU)
If you want to build riscv-gcc (as distinct from riscv-linux-gcc), riscv-pk, and riscv-tests, then simply run the full build script rather than the abbreviated one I provided.
O$ ./build.sh
Installing a Fresh Copy of the Linux Headers
If you (or someone you know) have changed the Linux headers, you'll need to install a new version to your system root before you build riscv-linux-gcc to make sure the kernel and the C library agree on their interfaces. (Note that you'll need to pull in the Linux kernel sources before you perform these steps. If you haven't, do so now.)
First, go to the Linux directory and perform a headers check:
O$ cd $TOP/linux-3.14.19
$ make ARCH=riscv headers_check
Once the headers have been checked, install them.
O$ make ARCH=riscv headers_install INSTALL_HDR_PATH=$SYSROOT/usr
(Substitute the path specified by INSTALL_HDR_PATH if so desired.)
Installing riscv-linux-gcc to a Different Directory than riscv-gcc
It may be desirable to install riscv-linux-gcc to a different directory. If that is the case, then run these commands instead of the ones prescribed at the end of the section:
O$ export RISCV_LINUX_GCC=[/path/to/different/directory]
O$ export PATH=$PATH:$RISCV_LINUX_GCC/bin
O$ make linux INSTALL_DIR=$RISCV_LINUX_GCC SYSROOT=$SYSROOT
First, set the environment variable $RISCV_LINUX_GCC to the directory in which you want the new tools to be installed. Then, add $RISCV_LINUX_GCC/bin to your $PATH environment variable. Finally, invoke make, specifying $RISCV_LINUX_GCC as the target installation directory.
Using Filesystem in Userspace (FUSE) to Create a Disk Image
If you are unable (or unwilling) to use mount to mount the newly-created disk image for modification, and you also have Filesystem in Userspace (FUSE), you can use these commands to modify your disk image.
First, create a folder as your mount point.
O$ mkdir mnt
Then, mount the disk image with FUSE. The -o rw+ option is considered experimental by FUSE developers and may corrupt your disk image. If you experience strange behavior in your disk image, you might want to delete your image and make a new one. Continuing, mount the disk:
O$ fuseext2 -o rw+ root.bin mnt
Modify the disk image as described, but remember to unmount the disk using FUSE, not umount:
O$ fusermount -u mnt
Building BusyBox as a Dynamically-Linked Executable
If you want to conserve space on your root disk, or you want to support dynamically-linked binaries, you will want to build BusyBox as a dynamically-linked executable. You'll need to have these libraries:
- libc.so.6, the C library
- ld.so.1, the run-time dynamic linker
If BusyBox calls for additional libraries (e.g. libm), you will need to include those as well.
These were built when we compiled riscv-linux-gcc and were placed in $SYSROOT. So, mount your root disk (if not mounted already), cd into it, and copy the libraries into lib:
O$ cp $SYSROOT/lib/libc.so.6 lib/
O$ cp $SYSROOT/lib/ld.so.1 lib/
That's it for the libraries. Go back to the BusyBox configuration and set BusyBox to be built as a dynamically-linked binary by unchecking the CONFIG_STATIC box in the menuconfig interface:
- CONFIG_STATIC=n, listed as "Build BusyBox as a static binary (no shared libs)" in BusyBox Settings → Build Options
To make things a little faster, I've used a bit of sed magic instead.
O$ cd $TOP/busybox-1.21.1
O$ sed -i 's/CONFIG_STATIC=y/# CONFIG_STATIC is not set/' .config
Then, rebuild and reinstall BusyBox into mnt/bin.
O$ make -j
O$ cd $TOP/linux-3.14.19/mnt
O$ cp $TOP/busybox-1.21.1/busybox bin
Appendices
Building the Native Compiler (3.19 SBU)
GCC is a complicated beast, to say the least. As of this writing, the configuration to build a version of RISC-V GCC that runs on Linux/RISC-V is still located on the "native" branch of the riscv-gcc repository. If you desire, read on to build the native compiler...
First, we'll need to make a bigger disk image. A size of 256 MiB seems to be plenty enough. Then, fill it with all of the goodies from the example root image.
$ cd $TOP/linux-3.14.19
$ dd if=/dev/zero of=root-gcc.img bs=1M count=256
$ mkfs.ext2 -F root-gcc.img
$ mkdir mnt-gcc
$ sudo mount -o loop root-gcc.img mnt-gcc
$ cd mnt-gcc
$ mkdir -p bin etc dev lib proc sbin tmp usr usr/bin usr/lib usr/sbin
$ curl -L http://riscv.org/install-guides/linux-inittab > etc/inittab
If you want a better text editor than piping echo to a file, now is the time to rebuild BusyBox with vi. These instructions will assume you've got the original BusyBox binary. Either way, copy it into bin, and set up the init symlink.
$ cp $TOP/busybox-1.21.1/busybox bin/
$ ln -s ../bin/busybox sbin/init
Now, let's build the native compiler. Change into the riscv-gcc repository, clean up the other things, and then check out the "native" branch.
$ cd $TOP/riscv-tools/riscv-gcc
$ make clean
$ git checkout native
We'll need to apply a bothersome and hackish patch to GCC so that SSIZE_MAX is defined. I really didn't want to do this, but it will suffice for now.
$ patch -p1 < native-patches/native-host-linux.patch
In case you're wondering, native-patches/ also contains augmented configuration files for the MPC, GMP, and MPFR libraries so that they recognize riscv as a machine. Someday, we hope that we won't have to patch them.
Now, we are ready to build the native compiler. Run the configure script to generate a fresh Makefile, and then invoke make with this command:
$ ./configure
$ make native SYSROOT=$SYSROOT
Note that we've only supplied $SYSROOT, because this compiler only runs within Linux/RISC-V. Therefore, we should place it along with our other files in $SYSROOT.
During the build process, your machine will automatically fetch the sources for GMP, MPC, and MPFR (the GNU Multi Precision Arithmetic Library, the GNU Multi Precision C Library, and the GNU Multiple Precision Floating-Point Reliably Library). It will patch the sources as previously described, and then it will automatically build them, too. (They'll be removed at the end of the native build once it's complete because they will interfere with building the "newlib" or "linux" Makefile targets.)
Once the build is complete, your binaries will be located in $SYSROOT/usr/bin. It would be easy enough to copy out gcc, but there are a bunch of other files that are needed for it to run properly. We'll copy those now.
$ cd $TOP/linux-3.14.19/mnt-gcc
$ cp $SYSROOT/usr/bin/gcc usr/bin
$ cp $SYSROOT/usr/bin/{as,ld,readelf} usr/bin
$ cp $SYSROOT/usr/lib/crt{1,i,n}.o usr/lib
$ cp $SYSROOT/usr/lib/libc.a usr/lib
$ mkdir usr/libexec
$ cp -r $SYSROOT/usr/libexec/gcc usr/libexec
$ cp -r $SYSROOT/usr/lib/gcc usr/lib
$ cp $SYSROOT/lib/ld.so.1 lib
$ cp $SYSROOT/lib/libc.so.6 lib
$ cp $SYSROOT/lib/libdl.so.2 lib
$ cp $SYSROOT/usr/lib/libgcc_s.so* lib
$ cp -r $SYSROOT/usr/include usr/
I could tell you why each of these libraries is important, but you can learn that yourself: try excluding any of them and see if your program works at all.
Change out of the directory, unmount the root disk image, and then boot the kernel.
$ cd ..
$ sudo umount mnt-gcc
$ spike +disk=root-gcc.img vmlinux
In a short moment, the Linux kernel will boot. If you haven't done anything special to your shell or your user configuration, you'll soon be at a root prompt:
#
Use echo, hopefully built in to your ash applet, to write out a short "Hello world!" program:
# echo -e '#include <stdio.h>\n int main(void) { printf("Hello world!\\n"); return 0; }' > /tmp/hello.c
Invoke gcc.
# gcc -o /tmp/hello /tmp/hello.c
Wait for a dreadfully long time, and then run your hello world program on Linux/RISC-V.
# /tmp/hello
Hello world!
Congratulations! You've got a native gcc working on your system! If you get this far, think about it. You've used an x86 GCC compiler to compile riscv-linux-gcc, an x86-to-RISC-V cross-compiler. Then, you used riscv-linux-gcc to compile the native version of GCC, which runs on Linux/RISC-V to compile RISC-V binaries. If your head isn't spinning from all that meta, you may consider reading Hofstadter.
"Hello world!" compiled and run on Linux/RISC-V and readelf output.
References
- Waterman, A., Lee, Y., Patterson, D., and Asanovic, K., "The RISC-V Instruction Set Manual," vol. II, http://inst.eecs.berkeley.edu/~cs152/sp12/handouts/riscv-supervisor.pdf, 2012.
- Bovet, D.P., and Cesati, M. Understanding the Linux Kernel, 3rd ed., O'Reilly, 2006.
- Gorman, M. Understanding the Linux Virtual Memory Manager, http://www.csn.ul.ie/~mel/docs/vm/guide/pdf/understand.pdf, 2003.
- Corbet, J., Rubini, A., and Kroah-Hartman, G. Linux Device Drivers, 3rd ed., O'Reilly, 2005.
- Beekmans, G. Linux From Scratch, version 7.3, http://www.linuxfromscratch.org/lfs/view/stable/, 2013.
RISC-V Cross-Compiler
Author : Andrew Waterman, Yunsup Lee
Date : November 11, 2011
Version : (under version control)
This is the RISC-V C and C++ cross-compiler. It supports two build modes: a generic ELF/Newlib toolchain and a more sophisticated Linux-ELF/glibc toolchain. The latter is also the basis for the RISC-V Akaros cross- compiler, which is kept separately in the Akaros repository.
Installation (Newlib)
To build the Newlib cross-compiler, pick an install path. If you choose, say, /opt/riscv, then add /opt/riscv/bin to your PATH now. Then, simply run the following commands:
./configure --prefix=/opt/riscv
make
You should now be able to use riscv-gcc and its cousins.
Installation (Linux)
To build the Linux cross-compiler, pick an install path. If you choose, say, /opt/riscv, then add /opt/riscv/bin to your PATH now. Then, simply run the following commands:
./configure --prefix=/opt/riscv
make linux
riscv-gdb
RISC-V GDB port
Build Instructions
$ mkdir build
$ cd build
$ ../configure --program-prefix=riscv- --target=riscv
$ make
$ sudo make install
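Assuming the build and install succeed, the debugger is installed with the riscv- prefix given to --program-prefix above; a quick hypothetical smoke test is to load a RISC-V ELF built earlier, such as the hello binary from the toolchain guide:
$ riscv-gdb hello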
Low Level Virtual Machine (LLVM)
This directory and its subdirectories contain source code for the Low Level Virtual Machine, a toolkit for the construction of highly optimized compilers, optimizers, and runtime environments.
LLVM is open source software. You may freely distribute it under the terms of the license agreement found in LICENSE.txt.
Please see the documentation provided in docs/ for further assistance with LLVM, and in particular docs/GettingStarted.rst for getting started with LLVM and docs/README.txt for an overview of LLVM's documentation setup.
If you're writing a package for LLVM, see docs/Packaging.rst for our suggestions.
RISC-V LLVM Support
Author : Colin Schmidt (colins@eecs.berkeley.edu)
Date : February 24, 2014
Version : (under version control)
This repository contains a new LLVM target for RISC-V. It supports the latest version of the ISA (2.0). This backend currently supports only assembly generation, so riscv-gcc must be used to assemble and link the executable.
The backend is structured similarly to most other LLVM backends and tries to use the tablegen format as much as possible. The descriptions of the instructions are found in RISCVInstFormats.td and RISCVInstrInfo*.td. The registers are described in RISCVRegisterInfo.td, and the calling convention is described in RISCVCallingConv.td.
The instructions are defined using the LLVM IR DAG format, and simple instructions that use pre-existing LLVM IR operations should be very easy to add. The instructions are divided into separate files based on their extension, e.g. atomic operations are defined in RISCVInstInfoA.td. Instructions implemented with these patterns are simply matched against the program's LLVM IR DAG for selection. More complicated instructions can use C++ to perform custom lowering of the LLVM IR in RISCVISelLowering.cpp. Combining multiple LLVM IR nodes into a single target instruction is also possible using C++ in the same file. In general, RISCVISelLowering.cpp sets up the lowering based on the ISA and the specific subtarget's features.
This backend does not include all features of a backend but is focused on generating assembly in an extensible way such that adding new ISA extensions and using them should be relatively painless. As the RISC-V support develops the backend may provide more features.
The compiler is fairly robust, with performance similar to riscv-gcc, so its use in any and all projects is encouraged.
Feedback and suggestions are welcome.
Installation
The LLVM RISCV backend is built just as the normal LLVM system.
$ git clone -b RISCV git@github.com:ucb-bar/riscv-llvm.git
$ git submodule update --init
$ mkdir build
$ cd build
$ ../configure --prefix=/opt/riscv
$ make
$ make install
Now if /opt/riscv is on your path, you should be able to use clang and LLVM with RISC-V support.
Use
Using llvm-riscv to build a full executable is fairly simple; however, you need riscv-gcc to do the assembling and linking. An example of compiling hello world:
$ cat hello.c
#include <stdio.h>
int main() {
printf("Hello World!\n");
}
$ clang -target riscv -mriscv=RV64IAMFD -S hello.c -o hello.S
$ riscv-gcc -o hello.riscv hello.S
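Assuming spike and riscv-pk are installed as described elsewhere in this document, you can then run the result under the proxy kernel:
$ spike pk hello.riscv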
C Language Family Front-end
Welcome to Clang. This is a compiler front-end for the C family of languages (C, C++, Objective-C, and Objective-C++) which is built as part of the LLVM compiler infrastructure project.
Unlike many other compiler frontends, Clang is useful for a number of things beyond just compiling code: we intend for Clang to be host to a number of different source level tools. One example of this is the Clang Static Analyzer.
If you're interested in more (including how to build Clang) it is best to read the relevant web sites. Here are some pointers:
Description | Website |
---|---|
Information on Clang: | http://clang.llvm.org/ |
Building and using Clang: | http://clang.llvm.org/get_started.html |
Clang Static Analyzer: | http://clang-analyzer.llvm.org/ |
Information on the LLVM project: | http://llvm.org/ |
If you have questions or comments about Clang, a great place to discuss them is
on the Clang development mailing list:
http://lists.cs.uiuc.edu/mailman/listinfo/cfe-dev
If you find a bug in Clang, please file it in the LLVM bug tracker:
http://llvm.org/bugs/
RISC-V ISA Simulator
Author : Andrew Waterman, Yunsup Lee
Date : June 19, 2011
Version : (under version control)
About
The RISC-V ISA Simulator implements a functional model of one or more RISC-V processors.
Build Steps
We assume that the RISCV environment variable is set to the RISC-V tools install path, and that the riscv-fesvr package is installed there.
$ mkdir build
$ cd build
$ ../configure --prefix=$RISCV --with-fesvr=$RISCV
$ make
$ [sudo] make install
Compiling and Running a Simple C Program
Install spike (see Build Steps), riscv-gcc, and riscv-pk.
Write a short C program and name it hello.c. Then, compile it into a RISC-V ELF binary named hello:
$ riscv-gcc -o hello hello.c
Now you can simulate the program atop the proxy kernel:
$ spike pk hello
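For instance, reusing the long-winded echo from the Newlib toolchain manual earlier in this document as the test program, the whole sequence looks like this:
$ echo -e '#include <stdio.h>\n int main(void) { printf("Hello world!\\n"); return 0; }' > hello.c
$ riscv-gcc -o hello hello.c
$ spike pk hello
If all is well, the program prints "Hello world!".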
Simulating a New Instruction
Adding an instruction to the simulator requires a few steps:
- Describe the instruction's functional behavior in the file riscv/insns/<new_instruction_name>.h. Examine other instructions in that directory as a starting point.
- Add the opcode and opcode mask to riscv/opcodes.h. Alternatively, add it to the riscv-opcodes package, and it will do so for you:
$ cd ../riscv-opcodes
$ vi opcodes // add a line for the new instruction
$ make install
- Rebuild the simulator.
Interactive Debug Mode
To invoke interactive debug mode, launch spike with -d:
$ spike -d pk hello
To see the contents of a register (0 is for core 0):
: reg 0 a0
To see the contents of a memory location (physical address in hex):
: mem 2020
To see the contents of memory with a virtual address (0 for core 0):
: mem 0 2020
You can advance by one instruction by pressing <enter>. You can also execute until a desired equality is reached:
: until pc 0 2020 (stop when pc=2020)
: until mem 2020 50a9907311096993 (stop when mem[2020]=50a9907311096993)
Alternatively, you can execute as long as an equality is true:
: while mem 2020 50a9907311096993
You can continue execution indefinitely by:
: r
At any point during execution (even without -d), you can enter the interactive debug mode with <control>-<c>.
To end the simulation from the debug prompt, press <control>-<c> or:
: q
riscv-qemu 
The riscv-softmmu target for full system emulation is currently supported. It supports booting riscv-linux (currently requires building from the qemu branch). A precompiled copy of the kernel is included in the "hacking_files" directory for convenience (see Method 1 under installation).
Installation
Method 1 (the quick way):
A sample kernel with an initramfs is included in the "hacking_files" directory. You can easily test out riscv-qemu this way:
$ git clone git@github.com:ucb-bar/riscv-qemu.git
$ cd riscv-qemu
$ git submodule update --init pixman
$ ./configure --target-list=riscv-softmmu
$ make
$ cd riscv-softmmu
$ # now, start qemu
$ ./qemu-system-riscv -kernel ../hacking_files/vmlinux/vmlinux -nographic
To exit this system, hit ctrl-a x.
Method 2 (system with persistent storage):
Booting from a block device is also supported. A more extensive guide for configuring the kernel/building a root fs will be available soon.
Step 1:
$ git clone git@github.com:ucb-bar/riscv-qemu.git
$ cd riscv-qemu
$ git submodule update --init pixman
$ ./configure --target-list=riscv-softmmu
$ make
$ cd riscv-softmmu
Step 2:
Instructions for the following two steps are coming soon:
a) Build the linux kernel from the qemu branch of riscv-linux with htif block device support.
b) Build the root.bin root filesystem.
Step 3:
Now from the riscv-softmmu/ directory, start qemu-system-riscv:
$ ./qemu-system-riscv -hda [your root.bin location] -kernel [your vmlinux location] -nographic
IMPORTANT: To cleanly exit this system, you must enter halt -f at the prompt and then hit ctrl-a x. Otherwise, the root filesystem will likely be corrupted.
Notes
- Qemu also supports a "linux-user" mode; however, this is currently not implemented for RISC-V.
- Once in a while, you may see a message from qemu of the form main-loop: WARNING: I/O thread spun for N iterations. You may safely ignore this message without consequence.
- Files/directories of interest:
  - target-riscv/
  - hw/riscv/
Linux/RISC-V
This is a port of Linux kernel for the RISC-V instruction set architecture. Development is currently based on the 3.14 longterm branch.
Building the kernel image
1. Fetch upstream sources and overlay the riscv architecture-specific subtree:
$ curl https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.14.15.tar.xz | tar -xJ
$ cd linux-3.14.15
$ git init
$ git remote add origin git@github.com:ucb-bar/riscv-linux.git
$ git fetch
$ git checkout -f -t origin/master
2. Populate ./.config with the default symbol values:
$ make ARCH=riscv defconfig
3. Optionally edit the configuration via an ncurses interface:
$ make ARCH=riscv menuconfig
4. Build the uncompressed kernel image:
$ make ARCH=riscv -j vmlinux
5. Boot the kernel in the functional simulator, optionally specifying a raw disk image for the root filesystem:
$ spike +disk=path/to/root.img vmlinux
Exporting kernel headers
The riscv-gcc repository includes a copy of the kernel header files. If the userspace API has changed, export the updated headers to the riscv-gcc source directory:
$ make ARCH=riscv headers_check
$ make ARCH=riscv INSTALL_HDR_PATH=path/to/riscv-gcc/linux-headers headers_install
Rebuild riscv-gcc with the linux target:
$ make INSTALL_DIR=path/to/install linux
RISC-V Proxy Kernel
About
The RISC-V proxy kernel is a thin layer that services system calls generated by code built and linked with the RISC-V newlib port.
Build Steps
We assume that the RISCV environment variable is set to the RISC-V tools install path, and that the riscv-gcc package is installed.
$ mkdir build
$ cd build
$ CC=riscv-gcc ../configure --prefix=$RISCV/riscv-elf --host=riscv
$ make
$ make install
RISC-V Frontend Server
About
This repository contains the front-end server library, which facilitates communication between a host machine and a RISC-V target machine. It is usually not meant to be used as a standalone package.
Build Steps
Execute the following commands to install the library, assuming you've declared the RISCV environment variable to point to the RISC-V install path:
$ mkdir build
$ cd build
$ ../configure --prefix=$RISCV
$ make install
riscv-tests
About
This repository hosts unit tests for RISC-V processors.
Building from repository
We assume that the RISCV environment variable is set to the RISC-V tools install path, and that the riscv-gcc package is installed.
$ git clone https://github.com/ucb-bar/riscv-tests
$ cd riscv-tests
$ git submodule update --init --recursive
$ autoconf
$ ./configure --prefix=$RISCV/target
$ make
$ make install
The rest of this document describes the format of test programs for the RISC-V architecture.
Test Virtual Machines
To allow maximum reuse of a given test, each test program is constrained to only use features of a given test virtual machine or TVM. A TVM hides differences between alternative implementations by defining:
- The set of registers and instructions that can be used.
- Which portions of memory can be accessed.
- The way the test program starts and ends execution.
- The way that test data is input.
- The way that test results are output.
The following table shows the TVMs currently defined for RISC-V. All of these TVMs only support a single hardware thread.
TVM Name | Description |
---|---|
rv32ui | RV32 user-level, integer only |
rv32si | RV32 supervisor-level, integer only |
rv64ui | RV64 user-level, integer only |
rv64uf | RV64 user-level, integer and floating-point |
rv64uv | RV64 user-level, integer, floating-point, and vector |
rv64si | RV64 supervisor-level, integer only |
rv64sv | RV64 supervisor-level, integer and vector |
A test program for RISC-V is written within a single assembly language file, which is passed through the C preprocessor, and all regular assembly directives can be used. An example test program is shown below. Each test program should first include the riscv_test.h header file, which defines the macros used by the TVM. The header file will have different contents depending on the target environment for which the test will be built. One of the goals of the various TVMs is to allow the same test program to be compiled and run on very different target environments yet still produce the same results. The following table shows the target environments currently defined.
Target Environment Name | Description |
---|---|
p | virtual memory is disabled, only core 0 boots up |
pm | virtual memory is disabled, all cores boot up |
pt | virtual memory is disabled, timer interrupt fires every 100 cycles |
v | virtual memory is enabled |
Each test program must next specify for which TVM it is designed by including the appropriate TVM macro, RVTEST_RV64U in this example. This specification can change the way in which subsequent macros are interpreted, and supports a static check of the TVM functionality used by the program.
The test program will begin execution at the first instruction after RVTEST_CODE_BEGIN, and continue until execution reaches an RVTEST_PASS macro or the RVTEST_CODE_END macro, which is implicitly a success. A test can explicitly fail by invoking the RVTEST_FAIL macro.
The example program contains self-checking code to test the result of the add. However, self-checks rely on correct functioning of the processor instructions used to implement the self check (e.g., the branch) and so cannot be the only testing strategy.
All tests should also contain a test data section, delimited by RVTEST_DATA_BEGIN and RVTEST_DATA_END. There is no alignment guarantee for the start of the test data section, so regular assembler alignment instructions should be used to ensure desired alignment of data values. This region of memory will be captured at the end of the test to act as a signature from the test. The signature can be compared with that from a run on the golden model.
Any given test environment for running tests should also include a timeout facility, which will class a test as failing if it does not successfully complete within a reasonable time bound.
#include "riscv_test.h"
RVTEST_RV64U # Define TVM used by program.
# Test code region.
RVTEST_CODE_BEGIN # Start of test code.
lw x2, testdata
addi x2, x2, 1 # Should be 42 into $2.
sw x2, result # Store result into memory overwriting 1s.
li x3, 42 # Desired result.
bne x2, x3, fail # Fail out if doesn't match.
RVTEST_PASS # Signal success.
fail:
RVTEST_FAIL
RVTEST_CODE_END # End of test code.
# Input data section.
# This section is optional, and this data is NOT saved in the output.
.data
.align 3
testdata:
.dword 41
# Output data section.
RVTEST_DATA_BEGIN # Start of test output data region.
.align 3
result:
.dword -1
RVTEST_DATA_END # End of test output data region.
User-Level TVMs
Test programs for the rv32u* and rv64u* TVMs can contain all instructions from the respective base user-level ISA (RV32 or RV64), except for those with the SYSTEM major opcode (syscall, break, rdcycle, rdtime, rdinstret). All user registers (pc, x0-x31, f0-f31, fsr) can be accessed.
The rv32ui and rv64ui TVMs are integer-only subsets of rv32u and rv64u respectively. These subsets cannot use any floating-point instructions (major opcodes: LOAD-FP, STORE-FP, MADD, MSUB, NMSUB, NMADD, OP-FP), and hence cannot access the floating-point register state (f0-f31 and fsr). The integer-only TVMs are useful for initial processor bringup and for testing simpler implementations that lack a hardware FPU.
Note that any rv32ui test program is also valid for the rv32u TVM, and similarly rv64ui is a strict subset of rv64u. To allow a given test to run on the widest possible set of implementations, it is desirable to write any given test to run on the smallest or least capable TVM possible. For example, any simple tests of integer functionality should be written for the rv64ui TVM, as the same test can then be run on RV64 implementations with or without a hardware FPU. As another example, all tests for these base user-level TVMs will also be valid for more advanced processors with instruction-set extensions.
At the start of execution, the values of all registers are undefined. All branch and jump destinations must be to labels within the test code region of the assembler source file. The code and data sections will be relocated differently for the various implementations of the test environment, and so test program results shall not depend on absolute addresses of instructions or data memory. The test build environment should support randomization of the section relocation to provide better coverage and to ensure test signatures do not contain absolute addresses.
Supervisor-Level TVMs
The supervisor-level TVMs allow testing of supervisor-level state and instructions. As with the user-level TVMs, we provide integer-only supervisor-level TVMs, indicated with a trailing i.
History and Acknowledgements
This style of test virtual machine originated with the T0 (Torrent-0) vector microprocessor project at UC Berkeley and ICSI, begun in 1992. The main developers of this test strategy were Krste Asanovic and David Johnson. A precursor to torture was rantor, developed by Phil Kohn at ICSI.
A variant of this testing approach was also used for the Scale vector-thread processor at MIT, begun in 2000. Ronny Krashinsky and Christopher Batten were the principal architects of the Scale chip. Jeffrey Cohen and Mark Hampton developed a version of torture capable of generating vector-thread code.
fpga-zynq
This repo contains the software and instructions necessary to run a RISC-V Rocket Core on various Zynq-based boards.
Setup
First, enter the directory that corresponds to the board you intend to use. The currently available options are: zybo.
Method 1 (Quick):
Step 1: Copy everything from REPO/[your board name]/sd_image to the root of your SD card.
Step 2: Copy everything from the REPO/common/sd_image directory to the root of your SD card.
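Concretely, if your SD card were mounted at a hypothetical /media/sdcard, the two copies for the Zybo might look like this (a sketch; substitute your own mount point and board directory):
$ cp -r REPO/zybo/sd_image/* /media/sdcard/
$ cp -r REPO/common/sd_image/* /media/sdcard/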
Jump to the "Booting Up and interacting with the RISC-V Rocket core" section to continue.
Method 2 (Build From Scratch):
You may also build the system from scratch. These instructions have been tested with Vivado 2013.4 and Xilinx SDK 2013.4 and are designed for the Zybo. You can adapt these for different boards in the repo by changing the relevant parts of filenames in the instructions.
As a quick summary, here's what the Zybo will do when booting:
1) Load the Rocket Core onto the Zynq's programmable logic and configure the on-board ARM core. These come from the system_wrapper.bit file, which will be part of boot.bin on the SD card.
2) Run the First Stage BootLoader (FSBL), which is part of boot.bin which we will eventually create.
3) The FSBL will start u-Boot, which sets up the environment for linux on the ARM core.
4) Finally, u-Boot will start our linux image (uImage) on the ARM core. This requires a custom devicetree.dtb file, which we will compile shortly.
We'll now go ahead and build each of these components.
Step 0:
Make sure you've sourced the settings for Vivado as necessary on your system.
Step 1: Building the Vivado Project
$ git clone git@github.com:ucb-bar/fpga-zynq.git
$ cd fpga-zynq/zybo/hw
$ vivado -mode tcl -source zynq_rocketchip.tcl
$ cd zynq_rocketchip
$ vivado ./zynq_rocketchip.xpr
Once in Vivado, first hit "Generate Block Design" -> system.bd, then hit "Open Block Design" -> system.bd.
Then, run Generate Bitstream (it should step through Synthesis and Implementation automatically). While this is running, you can go ahead and follow Steps 3 and 5 to build u-boot and Linux (but not the FSBL).
Once bitstream generation in Vivado finishes, make sure that "Open Implemented Design" is selected and hit "OK". After the implemented design is open, we can export the project to the Xilinx SDK. To do so, hit File -> Export -> "Export Hardware for SDK...".
In the dialog that pops up, make sure that Source is selected as "system.bd".
You can leave "Export to" and "Workspace" as their defaults, which should be <Local to Project>. The rest of the tutorial assumes that this is the case.
Finally, make sure that "Export Hardware", "Include bitstream", and "Launch SDK"
are all checked. Then click OK. Once the SDK launches, feel free to exit Vivado.
In case the SDK fails to launch, the command to do so is:
$ xsdk -bit [ZYBO REPO LOCATION]/hw/zynq_rocketchip/zynq_rocketchip.sdk/SDK/SDK_Export/hw/system_wrapper.bit -workspace [ZYBO REPO LOCATION]/hw/zynq_rocketchip/zynq_rocketchip.sdk/SDK/SDK_Export -hwspec [ZYBO REPO LOCATION]/hw/zynq_rocketchip/zynq_rocketchip.sdk/SDK/SDK_Export/hw/system.xml
Step 2: Building the First-Stage BootLoader (FSBL) in Xilinx SDK
Now, we'll use the SDK to build the FSBL:
Inside the SDK, hit File -> New -> Project. In the window that pops up, expand the Xilinx node in the tree of options and click "Application Project".
In the following window, enter "FSBL" as the Project name and ensure that Hardware Platform under Target Hardware is set to hw_platform_0 and that the Processor is set to ps7_cortexa9_0. Under Target Software, ensure that OS Platform is set to standalone, Language is set to C, and Board Support Package is set to Create New, with "FSBL_bsp" as the name.
After you hit Next, you'll be presented with a menu of available templates. Select "Zynq FSBL" and click Finish. Once you do so, you should see compilation in progress in the console. Wait until "Build Finished" appears. Do not exit the Xilinx SDK yet; we'll come back to it in a couple of steps to package up boot.bin.
Step 3: Build u-boot for the Zybo
First, you'll want to grab the u-boot source for the Zybo. Since things tend to shift around in Digilent's repos, we've made a fork to ensure a stable base for this guide. This copy of the repo also contains a modified version of the default Zybo u-boot configuration to support the RISC-V Rocket core. The repo is already present as a submodule, so you'll need to initialize it (this will also download the Linux source for you, which you'll need in Step 5):
[From inside the zybo repo]
$ git submodule update --init
Next, we'll go ahead and build u-boot. To do so, enter the following:
$ cd u-boot-Digilent-Dev
$ make CROSS_COMPILE=arm-xilinx-linux-gnueabi- zynq_zybo_config
$ make CROSS_COMPILE=arm-xilinx-linux-gnueabi-
Once the build completes, the file we need is called "u-boot", but we need to rename it. Run the following to do so:
$ mv u-boot u-boot.elf
Step 4: Create boot.bin
At this point, we'll package up the three main files we've built so far: system_wrapper.bit, the FSBL binary, and the u-boot.elf binary. To do so, return to the Xilinx SDK and select "Create Zynq Boot Image" from the "Xilinx Tools" menu.
In the window that appears, choose an output location for the boot.bif file we'll create. Do not check "Use Authentication" or "Use Encryption". Also set an output path for boot.bin at the bottom (it will likely be called output.bin by default). Now, in the Boot Image Partitions area, hit Add.
In the window that pops up, ensure that Partition type is set as bootloader and click Browse. The file we're looking for is called FSBL.elf. If you've followed these instructions exactly, it will be located in:
[ZYBO REPO LOCATION]/hw/zynq_rocketchip/zynq_rocketchip.sdk/SDK/SDK_Export/FSBL/Debug/FSBL.elf
Once you've located the file, click OK.
Next, we'll add the bit file containing our design (with the Rocket core). Click Add once again. This time, Partition type should be set as datafile. Navigate to the following directory to choose your bit file:
[ZYBO REPO LOCATION]/hw/zynq_rocketchip/zynq_rocketchip.sdk/SDK/SDK_Export/hw_platform_0/system_wrapper.bit
Once you've located the file, click OK.
Finally, we'll add in u-boot. Again, click Add and ensure that Partition type is set to datafile in the window that appears. Now, click Browse and select u-boot.elf, located in:
[ZYBO REPO LOCATION]/u-boot-Digilent-Dev/u-boot.elf
Once you've located the file, click OK and hit Create Image in the Create Zynq Boot Image window. Copy the generated boot.bin file to your SD card. You may now close the Xilinx SDK window.
Step 5: Build Linux Kernel uImage
Next, we'll move on to building the Linux kernel uImage. Again, to keep our dependencies in order, we've created a fork of the Digilent Linux repo and added a modified dts file to support Rocket's Host-Target Interface (HTIF). When you ran git submodule update --init back in Step 3, the source for Linux was pulled into [ZYBO REPO LOCATION]/Linux-Digilent-Dev. Enter that directory and do the following to build Linux:
$ make ARCH=arm CROSS_COMPILE=arm-xilinx-linux-gnueabi- xilinx_zynq_defconfig
$ make ARCH=arm CROSS_COMPILE=arm-xilinx-linux-gnueabi-
This will produce a zImage file; however, we need a uImage. In order to build a uImage, the Makefile needs to be able to run the mkimage program found in u-boot-Digilent-Dev/tools. Thus, we'll have to add it to our path:
$ export PATH=$PATH:[ZYBO REPO LOCATION]/u-boot-Digilent-Dev/tools
$ make ARCH=arm CROSS_COMPILE=arm-xilinx-linux-gnueabi- UIMAGE_LOADADDR=0x8000 uImage
This will produce the file [ZYBO REPO LOCATION]/Linux-Digilent-Dev/arch/arm/boot/uImage. Copy this file to your SD card.
Next, we'll compile our device tree so that the kernel knows where our devices are. The copy of Linux in this repo already contains a modified device tree to support the HTIF infrastructure that Rocket needs. To compile it, run the following from inside [ZYBO REPO LOCATION]/Linux-Digilent-Dev/:
$ ./scripts/dtc/dtc -I dts -O dtb -o devicetree.dtb arch/arm/boot/dts/zynq-zybo.dts
This will produce a devicetree.dtb file, which you should copy to your SD card.
Finally, copy the uramdisk.image.gz file from REPO/common/sd_image/ to your SD card. This is the root filesystem for ARM Linux, which contains a copy of riscv-fesvr (the frontend server) that will interact with our Rocket core on the programmable logic. If you'd like to modify this ramdisk, see Appendix A at the bottom of this document.
At this point, there should be 4 files on your SD card. Continue to the "Booting Up and Interacting with the RISC-V Core" section.
Booting Up and Interacting with the RISC-V Core
Finally, copy the REPO/common/sd_image/riscv directory to your SD card. This contains a copy of the Linux kernel compiled for RISC-V, along with an appropriate root filesystem. At this point, the directory structure of your SD card should match the following:
SD_ROOT/
|-> riscv/
|-> root_spike.bin
|-> vmlinux[_nofpu]
|-> boot.bin
|-> devicetree.dtb
|-> uImage
|-> uramdisk.image.gz
Insert the microSD card into your Zybo. At this point, you have two options for logging in:
1) USB-UART
To connect over USB, do the following (the text following tty. may vary):
$ screen /dev/tty.usbserial-210279575138B 115200,cs8,-parenb,-cstopb
2) Telnet
You may also connect over telnet using ethernet. The default IP address is 192.168.192.5:
$ telnet 192.168.192.5
In either case, you'll eventually be prompted to log in to the ARM system. Both the username and password are root.
Once you're in, you'll need to mount the SD card to access the files necessary to boot Linux on the Rocket core. Do the following:
$ mkdir /sdcard
$ mount /dev/mmcblk0p1 /sdcard
Finally, we can go ahead and boot Linux on the Rocket core:
$ ./fesvr-zedboard +disk=/sdcard/riscv/root_spike.bin /sdcard/riscv/vmlinux_nofpu
Appendices
Appendix A: Modifying the rootfs:
The ramdisk that holds Linux (uramdisk.image.gz) is a gzipped cpio archive with a u-boot header for the Zybo. To open it up (you will need sudo):
$ dd if=sd_image/uramdisk.image.gz bs=64 skip=1 of=uramdisk.cpio.gz
$ mkdir ramdisk
$ gunzip -c uramdisk.cpio.gz | sudo sh -c 'cd ramdisk/ && cpio -i'
When changing or adding files, be sure to keep track of owners, groups, and permissions. When you are done, to package it back up:
$ sh -c 'cd ramdisk/ && sudo find . | sudo cpio -H newc -o' | gzip -9 > uramdisk.cpio.gz
$ mkimage -A arm -O linux -T ramdisk -d uramdisk.cpio.gz uramdisk.image.gz
Appendix B: Building Slave.v from reference-chip
The copy of FPGATop.v in this repo was built from https://github.com/ucb-bar/reference-chip/tree/ef19ba3aeb733c1f370055848733ee55148c17c2
The following changes were made to allow the design to fit on the Zybo:
1) Removed the L2 cache
2) Removed the FPU
About Chisel
Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages.
Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference.
Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to be passed to standard ASIC or FPGA tools for synthesis and place-and-route.
Visit the community website for more information.
Getting started
Chisel Users
To start working on a circuit with Chisel, create a simple build.sbt and a Scala source file containing your Chisel code, as follows.
$ cat build.sbt
scalaVersion := "2.10.2"
addSbtPlugin("com.github.scct" % "sbt-scct" % "0.2.1")
libraryDependencies += "edu.berkeley.cs" %% "chisel" % "latest.release"
(Your build.sbt file needs to reference a Scala version greater than or equal to 2.10 and declare a dependency on the Chisel library.)
Edit the source files for your circuit
$ cat Hello.scala
import Chisel._

// A trivial module with no I/O that prints on every clock cycle.
class HelloModule extends Module {
  val io = new Bundle {}
  printf("Hello World!\n")
}

// An empty tester; instantiating it is enough to run the module.
class HelloModuleTests(c: HelloModule) extends Tester(c) {
}

object hello {
  def main(args: Array[String]): Unit = {
    // Generate the C++ emulator plus a test harness, then run the tester.
    chiselMainTest(Array[String]("--backend", "c", "--testing", "--genHarness"),
      () => Module(new HelloModule())){c => new HelloModuleTests(c)}
  }
}
At this point you will need to download and install sbt for your favorite distribution. You will need sbt version 0.13.0 or higher because recent versions of sbt generate jars without the third (patch) component of the Scala version number (i.e., chisel_2.10-2.0.2.jar instead of chisel_2.10.2-2.0.2.jar).
Execute sbt run to generate the C++ simulation source for your circuit
$ sbt run
Compile the resulting C++ output to generate a simulation executable
$ g++ -std=c++11 -o HelloModule HelloModule.cpp HelloModule-emulator.cpp
Run the simulation executable for one clock cycle to generate a simulation trace
$ ./HelloModule 1
Hello World!
Going further, you should read up on the sbt directory structure to organize your files for bigger projects (see the sketch below). sbt is the "official" build system for Scala, but you can use any other Java build system you like (Maven, etc.).
Chisel is implemented 100% in Scala!
Chisel developers
Before you generate a pull request, run the following commands to ensure all unit tests (with code coverage) pass and to check for coding style compliance, respectively.
$ sbt scct:test
$ sbt scalastyle
You can follow Chisel metrics on style compliance and code coverage on the website.
If you are debugging an issue in a third-party project that depends on the Chisel jar, first check that the Chisel version in your Chisel code base matches the third-party project's library dependency. After editing the Chisel code base, delete the local jar cache directory to make sure you are not picking up stale jar files, then publish the Chisel jar locally and rebuild your third-party project. Example:
$ cat *srcTop*/chisel/build.sbt
...
version := "2.3-SNAPSHOT"
...
$ cat *srcTop*/riscv-sodor/project/build.scala
...
libraryDependencies += "edu.berkeley.cs" %% "chisel" % "2.3-SNAPSHOT"
...
$ rm -rf ~/.sbt ~/.ivy2
$ cd *srcTop*/chisel && sbt publish-local
$ cd *srcTop*/riscv-sodor && make run-emulator
Publishing to the public Maven repo:
$ diff -u build.sbt
-version := "2.3-SNAPSHOT"
+version := "2.3"
$ sbt publish-signed
Making the Chisel jar file with Maven (>=3.0.4)
Some of the library jars in /lib are not available in the central Maven repository. They can be installed to your local repo using the script 'install_maven_libs':
$ ./install_maven_libs
Build Chisel with:
$ mvn install
Two Maven profiles are available, to skip the tests or to ignore failures, respectively:
$ mvn install -Pskip
$ mvn install -Pignore
riscv-angel
ANGEL is a JavaScript RISC-V ISA (RV64) simulator that runs riscv-linux with BusyBox.
Check out the demo running at: http://riscv.org/angel/
Rocket Core Generator
Rocket is a 6-stage single-issue in-order pipeline that executes the 64-bit scalar RISC-V ISA. Rocket implements an MMU that supports page-based virtual memory and is able to boot modern operating systems such as Linux. Rocket also has an optional IEEE 754-2008-compliant FPU, which implements both single- and double-precision floating-point operations, including fused multiply-add.
We plan to open-source our Rocket core generator written in Chisel in the near future. We are currently in the process of cleaning up the repository. Please stay tuned.
Currently, a Rocket core with an 8 KB direct-mapped L1 instruction cache and an 8 KB direct-mapped L1 data cache has been instantiated and committed to the fpga-zynq infrastructure repository. A copy of the generated Verilog is available here.
The following table compares a 32-bit ARM Cortex-A5 core to a 64-bit RISC-V Rocket core built in the same TSMC process (40GPLUS). The fourth column (R/A) is the ratio of RISC-V Rocket to ARM Cortex-A5. Both use single-instruction-issue, in-order pipelines, yet the RISC-V core is faster, smaller, and uses less power.
ISA/Implementation | ARM Cortex-A5 | RISC-V Rocket | R/A
---|---|---|---
ISA Register Width | 32 bits | 64 bits | 2
Frequency | >1 GHz | >1 GHz | 1
Dhrystone Performance | 1.57 DMIPS/MHz | 1.72 DMIPS/MHz | 1.1
Area excluding caches | 0.27 mm² | 0.14 mm² | 0.5
Area with 16 KB caches | 0.53 mm² | 0.39 mm² | 0.7
Area Efficiency | 2.96 DMIPS/MHz/mm² | 4.41 DMIPS/MHz/mm² | 1.5
Dynamic Power | <0.08 mW/MHz | 0.034 mW/MHz | >= 0.4
About The Sodor Processor Collection
Author : Christopher Celio (celio@eecs.berkeley.edu)
Author : Eric Love
Date : 2014 May 6
This repo has been put together to demonstrate a number of simple RISC-V integer pipelines written in Chisel:
- 1-stage (essentially an ISA simulator)
- 2-stage (demonstrates pipelining in Chisel)
- 3-stage (uses sequential memory)
- 5-stage (can toggle between fully bypassed or fully interlocked; see the sketch after this list)
- "bus"-based micro-coded implementation
All of the cores implement the RISC-V 32b integer base user-level ISA (RV32I) version 2.0. Only the 1-stage and 3-stage implement a minimal version of the supervisor mode (RV32IS), enough to execute the RISC-V proxy kernel (riscv-pk).
All processors talk to a simple scratchpad memory (asynchronous, single-cycle), with no backing outer memory (the 3-stage is the exception: its scratchpad is synchronous). Programs are loaded in via a Host-Target Interface (HTIF) port (while the core is held in reset), effectively making the scratchpads 3-port memories (instruction, data, HTIF).
This repository is set up to use the C++ backend of Chisel to generate and run the Sodor emulators. Users wishing to use the Verilog backend will need to write their own test harness and glue code to interface with their own tool flows.
This repo works great as an undergraduate lab (and has been used by Berkeley's CS152 class for 3 semesters and counting). See doc/ for an example, as well as for some processor diagrams.
Getting the repo
$ git clone https://github.com/ucb-bar/riscv-sodor.git
Building the processor emulators
Because this repository is designed to provide RISC-V processor examples written in Chisel (and a regression test suite for Chisel updates), no external RISC-V tools are used (with the exception of the RISC-V front-end server). The assumption is that riscv-gcc is not available on the local system. Thus, the RISC-V unit tests and benchmarks were compiled and committed to the Sodor repository in the ./install directory (as are the .dump files).
Install the RISC-V front-end server, which handles communication between the host and the RISC-V target processors.
$ git clone https://github.com/ucb-bar/riscv-fesvr.git
$ cd riscv-fesvr
$ ./configure --prefix=/usr/local
$ make install
Build the sodor emulators
$ git clone https://github.com/ucb-bar/riscv-sodor.git
$ cd riscv-sodor
$ ./configure --with-riscv=/usr/local
$ make
Install the executables on the local system
$ make install
Clean all generated files
$ make clean
(Although you can set the prefix to any directory of your choice, it must be the same directory for both riscv-fesvr and riscv-sodor.)
(Alternative) Build together with Chisel sources
This repository packages sbt (the Scala Build Tool) for convenience. By default, sbt will fetch the Chisel package specified in project/build.scala.
If you are a Chisel developer and are using the Sodor cores to test your changes to the Chisel repository, it is convenient to rebuild the Chisel package before building the Sodor cores. To do so, fetch the Chisel repo from GitHub and pass the path to the local Chisel source directory to the configure script.
$ git clone https://github.com/ucb-bar/chisel.git
$ cd riscv-sodor
$ ./configure --with-riscv=/usr/local --with-chisel=../chisel
$ make
Creating a source release package
$ make dist-src
Running the RISC-V tests
$ make run-emulator
Gathering the results
(all) $ make reports
(cpi) $ make reports-cpi
(bp) $ make reports-bp
(stats) $ make reports-stats
(Optional) Running debug version to produce signal traces
$ make run-emulator-debug
When run in debug mode, all processors will generate .vcd information (viewable by your favorite waveform viewer). NOTE: The current build system assumes that the user has "vcd2vpd" installed. If not, you will need to make the appropriate changes to emulator/common/Makefile.include to remove references to "vcd2vpd".
All processors can also spit out cycle-by-cycle log information: see emulator/common/Makefile.include and add a "+verbose" to the emulator binary arguments list. WARNING: log files may become very large!
By default, the assembly tests already use "+verbose" and the longer-running benchmarks do not. See the Makefile rule "run-bmarks: $(global_bmarks_outgz)...", which, if uncommented, will run the benchmarks in log mode and save the output to a .gz file (you can use "zcat vvadd.out.gz | vim -" to read these files easily enough, if vim is your thing).
Although already done for you by the build system, to generate .vcd files, see ./Makefile to add the "--vcd" flag to Chisel, and emulator/common/Makefile.include to add the "-v${vcdfilename}" flag to the emulator binary. Currently, the .vcd files are converted to .vpd files and then the .vcd files are deleted. If you do not have vcd2vpd, you will want to remove references to vcd2vpd in emulator/common/Makefile.include.
The 1-stage and 3-stage can run the bmarks using the proxy kernel (pk), which allows them to trap and emulate illegal instructions (e.g., div/rem) and allows the use of printf from within the bmark application! (This assumes the benchmarks have been compiled for use on a proxy kernel. For example, bare-metal programs begin at PC=0x2000, whereas the proxy kernel expects the benchmark's main to be located at 0x10000. This is controlled by the SUPERVISOR_MODE variable in tests/riscv-bmarks/Makefile.)
Have fun!
FAQ
What is the goal of these cores?
First and foremost, to provide a set of easy-to-understand cores that users can easily modify and play with. Sodor is useful both as a quick introduction to the RISC-V ISA and to the hardware construction language Chisel.
Are there any diagrams of these cores?
Diagrams of the 1-stage, 2-stage, and ucode can be found in doc/, at the end of the lab handout. A more comprehensive write-up on the micro-code implementation can be found at the CS152 website.
How do I generate Verilog code for use on an FPGA?
The Sodor repository is set up to use the C++ backend of Chisel to generate and run the Sodor emulators. Users wishing to use the Verilog backend will unfortunately need to write their own test harness and glue code to interface with their own tool flows.
Why no Verilog?
In a past iteration, Sodor used Synopsys's VCS and DirectC to provide a flow for Verilog RTL simulation. However, as VCS/DirectC is not freely available, it was not clear that committing Verilog code dependent on a proprietary simulation program was a good idea.
How can I generate Verilog myself?
You can generate the Verilog code by modifying the Makefile in emulator/common/Makefile.include. In the CHISEL_ARGS variable, change "--backend c" to "--backend v". This will dump a Top.v Verilog file of the core and its scratchpad memory (corresponding to the Chisel module named "Top") into the location specified by "--targetDir" in CHISEL_ARGS.
Once you have the Top.v module, you will have to write your own test harness and glue code to talk to Top.v. The main difficulty here is that you need to link riscv-fesvr to the Sodor core via the HTIF ("host-target interface") link. This allows the fesvr to load a binary into the Sodor core's scratchpad memory, bring the core out of reset, and communicate with the core while it's running to handle any syscalls, error conditions, or test-success/end conditions.
This basically involves porting emulator/*/emulator.cpp to Verilog. I recommend writing a Verilog test harness that interfaces with the existing C++ code (emulator/common/htif_emulator.cc, etc.). emulator/common/htif_main.cc shows an example stub that uses Synopsys's DirectC to interface between a Verilog test harness and the existing C++ code.
TODO
Here is an informal list of things that would be nice to get done. Feel free to contribute!
- Update the 3-stage to optionally use Princeton mode.
- Add stat information back in (e.g., print out the CPI, preferably leveraging the uarch-counters).
- Use the newest riscv-test benchmarks, which provide printf (but require syscall support) and dump out the uarch counter state.
- Use the riscv-dis binary to provide disassembly support (instead of using Chisel RTL, which is expensive), which is provided by the riscv-isa-run repository.
- Provide a Verilog test harness, and put the 3-stage on an FPGA.