Properly Detecting Intel® Software Guard Extensions in Your Applications
The Intel® Software Guard Extensions (Intel® SGX) SDK provides three functions for detecting and enabling Intel SGX support on systems. The CPUID instruction also provides an interface for detecting Intel SGX support on a CPU. The key question for software developers is: what is the proper way to detect Intel SGX support on a system so that their applications and their installers behave accordingly?
On the surface, this seems like a simple question. If the processor supports Intel SGX, the application can safely bring up enclaves and use them for protecting secrets from other software that is running on the system. If the processor does not support Intel SGX, the developer can choose to have the application stop and present an error to the user, or fall back to a non-Intel SGX code path.
That sounds simple, but the reality is far more complicated. To understand why, it’s necessary to understand the details behind Intel SGX support in a CPU, how that’s managed, and how it’s reported to applications.
Intel® Software Guard Extensions Support on a Platform
While an individual CPU may support Intel SGX, whether or not Intel SGX is actually available for use depends on two components:
- BIOS
- Intel SGX Platform Software package
We will discuss both of these in detail below.
BIOS Support
BIOS support is required for Intel SGX: it is the BIOS that provides the capability to enable and configure the Intel SGX feature in the system.
The system owner must opt in to Intel SGX by enabling it via the BIOS. This requires a BIOS from the OEM that explicitly supports Intel SGX. The support provided by the BIOS can vary from OEM to OEM and even across an OEM’s product lines.
There are three possible BIOS settings.
Setting | Meaning |
---|---|
Enabled | Intel SGX is enabled and available for use in applications. |
Software Controlled | Intel SGX can be enabled by software applications, but it is not available until this occurs (called the “software opt-in”). Enabling Intel SGX via software opt-in may require a system reboot. |
Disabled | Intel SGX is explicitly disabled and it cannot be enabled through software applications. This setting can only be changed in the BIOS setup screen. |
Note: Depending on your BIOS, you may only have the Enabled and Disabled options. Check with your device manufacturer.
When Intel SGX is set to Enabled in the BIOS, Intel SGX has been enabled, and Intel SGX instructions and resources are available to applications.
When Intel SGX is set to Software Controlled, Intel SGX is initially disabled until enabled via a software application that makes one of the following calls in the SDK:
sgx_enable_device()
sgx_cap_enable_device()
These functions perform the software opt-in, and are described in more detail below. Intel’s recommendation to OEMs and ODMs is to provide the Software Controlled mode and make this the default setting.
When Intel SGX is set to Disabled, it is explicitly disabled and cannot be enabled via software. To enable Intel SGX, the end user must either:
- Set it back to the Enabled state in the BIOS.
or
- Set it to the Software Controlled state in the BIOS (at which point, Intel SGX is still disabled until it is enabled via a software application).
What is the point of the Software Controlled state?
Intel SGX reserves up to 128 MB of system RAM as Processor Reserved Memory (PRM), which is used to hold the Enclave Page Cache (EPC). While its exact size is determined by the BIOS settings, it is important to note that enabling Intel SGX consumes a portion of the system’s resources, effectively making them unavailable to other applications.
The Software Controlled setting in the BIOS allows OEMs to ship systems with support for Intel SGX in a ready state, where it can be activated via software (this is the software opt-in). This is a compromise between having Intel SGX fully enabled by default and potentially consuming system resources even when no Intel SGX software is present on the system, and having it turned off completely. Allowing the activation to occur via software eliminates the need for end users to boot their systems into their BIOS setup screens and manually enable Intel SGX via that interface, a potentially daunting task for non-technical users.
The Platform Software Package
For Intel SGX to function, the Intel SGX Platform Software package, or PSW, must be installed on the system. The PSW includes:
- Runtime libraries
- Services that support and maintain the trusted compute block on the end user’s system
- Services that perform and manage critical Intel SGX operations such as attestation
- Interfaces to platform services such as the trusted time and monotonic counters
The PSW is installed by the software vendor as part of the application installation procedure. It is up to the software vendor’s application installer to detect whether or not the platform supports Intel SGX, and, if so, run the PSW installer.
If the Platform Software is already installed on the system, the PSW installation package will either upgrade the existing installation (if the existing installation is older) or exit without taking further action.
Windows* platforms
These two libraries are installed into the system directory on Windows platforms:
sgx_uae_service.dll
sgx_urts.dll
An application needs to load the DLLs in this order and ensure that it loads them from the system directory. It is recommended that applications call GetSystemDirectory() to obtain the system directory path, and SetDllDirectory() to set the DLL search path to that directory, prior to calling LoadLibrary(). Failing to restrict the load path to the system directory will leave the application vulnerable to DLL preloading attacks.
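Putting these recommendations together, a minimal sketch of the load sequence might look like the following (error handling is trimmed; only the DLL names and the Win32 calls named above come from this article, the rest is illustrative):

```cpp
#include <windows.h>
#include <tchar.h>

// Load the PSW runtime DLLs from the system directory only, to avoid
// DLL preloading attacks. This is a sketch, not production code.
static BOOL load_psw_dlls(HMODULE *h_uae_service, HMODULE *h_urts)
{
    TCHAR sysdir[MAX_PATH];

    // Restrict the DLL search path to the Windows system directory.
    if (GetSystemDirectory(sysdir, MAX_PATH) == 0) return FALSE;
    if (!SetDllDirectory(sysdir)) return FALSE;

    // Load the two PSW libraries in the documented order.
    *h_uae_service = LoadLibrary(_T("sgx_uae_service.dll"));
    *h_urts        = LoadLibrary(_T("sgx_urts.dll"));

    return (*h_uae_service != NULL && *h_urts != NULL);
}
```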
Linux* platforms
This document will be updated with the Linux procedure shortly after the Linux SDK is made available.
What about CPUID?
The CPUID instruction is not sufficient to detect the usability of Intel SGX on a platform. It can only report whether or not the processor supports the Intel SGX instructions; Intel SGX usability depends on both the BIOS settings and the PSW. Applications that make decisions based solely on CPUID enumeration run the risk of generating a #GP or #UD fault at runtime.
In addition, VMMs (e.g., Hyper-V*) can mask CPUID results and thus a system may support Intel SGX even though the results of the CPUID report that the Intel SGX feature flag is not set.
The Intel® Software Guard Extensions Feature Detection Procedure
When developing an application, Intel SGX detection must occur both within the application at runtime and within the application installer. Each has its own procedure.
Intel® Software Guard Extensions Detection: The Installer
All Intel SGX applications are required to install the PSW. It does not make sense to install the PSW on a system that cannot support Intel SGX. Application installers must check the local system for Intel SGX capability before attempting to install the PSW, and must also complete the software opt-in to enable Intel SGX. The recommended procedure is shown in Figure 1.
Figure 1. Intel SGX feature detection flowchart for installers.
1. Call sgx_is_capable(). If the system is Intel SGX-capable, go to step 2. If not, do not install the PSW and do not attempt the software opt-in. It is up to the application vendor to decide what to do next, but in general the options are:
   - Install the application if it supports both Intel SGX and non-Intel SGX code paths.
   - Install the non-Intel SGX version of the application if it’s distributed as a separate binary.
   - Abort the installation entirely if Intel SGX support is required and tell the user that the software is incompatible with the machine configuration.
2. Run the PSW installer. If the installation succeeds, go to step 3. Otherwise, abort the installation.
3. Call sgx_cap_enable_device() to enable Intel SGX, and then check the return result.
   - If the result is that Intel SGX is already enabled, no further action is needed.
   - If enabling Intel SGX is successful but a reboot is required, prompt the user that a reboot is necessary to run the newly installed application.
   - If enabling Intel SGX is successful but no reboot is required, no further action is needed.
   - If enabling Intel SGX is unsuccessful, present an error to the user.
Note that the two functions sgx_is_capable() and sgx_cap_enable_device() both require administrator privileges. Application installers generally require this level of permission, so it would not be unusual for the Intel SGX application installer to trigger a UAC prompt in Windows.
Because the installer containing these functions may run before the PSW has been installed, they are provided in standalone DLLs that are intended to be bundled with the installer program.
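Putting the installer procedure together, the detection and opt-in logic might look roughly like the sketch below. It assumes the standalone capability library exposes these functions through a header named sgx_capable.h, and it uses only the return values and status codes described in this article; treat it as an outline rather than a definitive implementation.

```cpp
#include <stdio.h>
#include "sgx_capable.h"   // assumed header for sgx_is_capable()/sgx_cap_enable_device()

// Returns 1 if the PSW should be installed and Intel SGX is (or will be) usable.
int installer_sgx_check(void)
{
    int sgx_capable = 0;

    // Step 1: is the platform Intel SGX-capable at all?
    if (sgx_is_capable(&sgx_capable) != SGX_SUCCESS || !sgx_capable)
        return 0;            // skip the PSW; fall back or abort per vendor policy

    // Step 2: run the PSW installer here (vendor-specific, not shown).

    // Step 3: attempt the software opt-in.
    sgx_device_status_t status;
    if (sgx_cap_enable_device(&status) != SGX_SUCCESS)
        return 0;            // opt-in could not even be attempted

    if (status == SGX_ENABLED)
        return 1;            // already enabled, nothing more to do
    if (status == SGX_DISABLED_REBOOT_REQUIRED) {
        printf("A reboot is required before Intel SGX can be used.\n");
        return 1;
    }

    printf("Intel SGX must be enabled manually in the BIOS setup screen.\n");
    return 0;
}
```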
Disabled versus No Support
It is not possible to differentiate between the following three cases:
- Intel SGX is not supported by the CPU
- Intel SGX is not supported by the BIOS
- Intel SGX is supported by the BIOS and CPU, but explicitly disabled in the BIOS
The procedure for the installer should not change, however. In all three cases, sgx_is_capable() will report a zero in its sgx_capable parameter, which indicates that the platform is not Intel SGX-capable.
Intel® Software Guard Extensions Detection: The Application
Detecting Intel SGX in an application at runtime is different from detecting it in an application installer. Once the application has been installed, one of four scenarios is possible:
- The system was not detected as Intel SGX-capable by the installer and thus does not have the Intel SGX Platform Software.
- The system was detected as being Intel SGX-capable by the installer and thus does have the Intel SGX Platform Software installed on it. The state of Intel SGX is either:
- Enabled, either explicitly in the BIOS or through the software opt-in performed by the installer.
- Enabled pending the software opt-in, meaning the user needs to reboot their system before Intel SGX instructions can be executed.
- Disabled, meaning the user has explicitly disabled Intel SGX at some point after the application was installed.
No matter which of these cases applies, however, the correct procedure for applications to follow is shown in Figure 2.
Figure 2. Intel SGX feature detection flowchart for applications.
1. Check to see whether the PSW has been installed. If it has, go to step 2. If not, Intel SGX is not available on the platform and the action to take depends on the code paths in the application itself.
   - If the application supports a non-Intel SGX code branch, it should execute the non-Intel SGX code.
   - If the application is Intel SGX-only, it must exit.
2. Call sgx_enable_device() to ensure that the software opt-in has occurred, and check the return value.
   - If the result is that Intel SGX is enabled, the application may execute the Intel SGX code path.
   - If Intel SGX could not be enabled, the application should fall back to the non-Intel SGX code path (if it has one), or exit (if it doesn’t).
   - If the result is that a reboot or some other manual action is required, such as a BIOS change, the application should inform the user of what action needs to be taken. The application can continue with a non-Intel SGX code path if desired.
Dynamic loading versus dynamic linking
Applications with both Intel SGX and non-Intel SGX code paths in a single binary must dynamically load the Intel SGX libraries. Dynamic linking is not an option since systems that lack Intel SGX support will not have the PSW package with the necessary runtime libraries. Attempting to run a dynamically linked executable on a system without the PSW package will result in unresolved symbol errors that prevent the application from launching.
Note that the check for the PSW package described above is a dynamic load of the necessary shared libraries. The application can simply keep these handles open.
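A sketch of this runtime check on Windows follows. It assumes sgx_enable_device() is exported by sgx_uae_service.dll and that sgx_device_status_t comes from the header named below; confirm both against the SDK documentation before relying on them.

```cpp
#include <windows.h>
#include "sgx_capable.h"        // assumed header for sgx_device_status_t / SGX_ENABLED

typedef sgx_status_t (*sgx_enable_device_fn)(sgx_device_status_t *);

// Returns 1 if the Intel SGX code path can be used right now, 0 to fall back.
int runtime_sgx_check(void)
{
    // Dynamically load the PSW runtime from the system directory
    // (search path restricted as described in the Windows section above).
    HMODULE h_uae  = LoadLibrary(TEXT("sgx_uae_service.dll"));
    HMODULE h_urts = LoadLibrary(TEXT("sgx_urts.dll"));
    if (h_uae == NULL || h_urts == NULL)
        return 0;                               // PSW not installed: no Intel SGX

    // Look up sgx_enable_device() instead of linking against it;
    // it is assumed here to be exported by sgx_uae_service.dll.
    sgx_enable_device_fn enable_device =
        (sgx_enable_device_fn) GetProcAddress(h_uae, "sgx_enable_device");
    if (enable_device == NULL)
        return 0;

    sgx_device_status_t status;
    if (enable_device(&status) != SGX_SUCCESS)
        return 0;

    // Only SGX_ENABLED means enclaves can be created now; a pending reboot
    // or required BIOS change means the non-SGX path should be used for now.
    return (status == SGX_ENABLED) ? 1 : 0;
}
```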
Functions
The enabling procedures make use of three functions from the Intel SGX SDK:
sgx_is_capable()
sgx_cap_enable_device()
sgx_enable_device()
These functions are described below. For more information, read the function reference in the Intel Software Guard Extensions SDK.
sgx_status_t sgx_is_capable (int *sgx_capable)
This function determines whether the system is capable of executing Intel SGX instructions under the current operating environment. The return value is type sgx_status_t, and indicates success or failure of the inquiry.
Return values
If it returns SGX_SUCCESS, the system was successfully queried for Intel SGX support and the result is stored in sgx_capable. A return of SGX_ERROR_NO_PRIVILEGE means the installer was not run with administrator privileges.
Any other return value means that the Intel SGX capability of the system could not be determined. In this case, application installers should be conservative and assume that the system is not Intel SGX capable.
Output parameters
A return value of SGX_SUCCESS does not mean that the system is capable of supporting Intel SGX, only that it was able to definitively answer the question. To determine whether the system is Intel SGX-capable, you must examine the value of the sgx_capable variable: a value of 1 means “yes,” and a value of 0 means “no.”
Notes
Because it requires administrative privileges in order to access the EFI variables, this function is intended to be used only by application installers. If the function reports that the system is Intel SGX-capable, the installer should proceed with installing the PSW package. The PSW is required to be bundled with all Intel SGX-capable application installers.
Due to the administrator requirement, this function must not be called from applications.
Windows application installers need to include the DLL sgx_capable.dll in their installation package.
sgx_status_t sgx_cap_enable_device (sgx_device_status_t *sgx_device_status)
This function attempts the software opt-in for Intel SGX and sets the final state of Intel SGX in sgx_device_status. The return value is type sgx_status_t and indicates whether the system can attempt the software opt-in. The return value does not indicate whether the Intel SGX device itself was successfully enabled. That information is stored in sgx_device_status only if the software opt-in was attempted.
This function requires administrator privileges because it must access the Software Control Interface made available via the BIOS. It is intended to be called by application installers, and executed after the PSW has been installed in order to enable Intel SGX.
Return values
If this function returns SGX_SUCCESS, the software opt-in was attempted, and the success or failure of that attempt is stored in sgx_device_status.
If the return value is SGX_ERROR_NO_PRIVILEGE, the installer was not run with administrator privileges.
Any other return value means that the software opt-in could not be attempted on this system or in the current environment. In this case, application installers should be conservative and assume that Intel SGX cannot be enabled via the software opt-in on this system.
Output parameters
A return value of SGX_SUCCESS does not mean that Intel SGX was successfully enabled, merely that the software opt-in was attempted. The value stored in sgx_device_status must be checked next.
SGX_ENABLED means that Intel SGX was already enabled.
SGX_DISABLED_REBOOT_REQUIRED means that the software opt-in was successful, but a reboot is required for completion of Intel SGX enabling. Applications that attempt to detect Intel SGX usability at runtime will be told that Intel SGX is not available until the reboot occurs.
Any other value indicates that the software opt-in failed, and Intel SGX must be manually enabled by the user via the BIOS setup screen.
Notes
Because this function requires administrative privileges, it must not be run by applications.
Windows application installers need to include the DLL sgx_capable.dll in their installation package.
sgx_status_t sgx_enable_device (sgx_device_status_t *sgx_device_status)
This function is similar to sgx_cap_enable_device(), but it is intended to be run by applications instead of installers.
It does not require administrative privileges; it contacts the AE Service running on the local machine and asks that service to attempt the software opt-in. Note that this creates a dependency on the PSW: it must be installed on the system.
Applications run sgx_enable_device() to ensure that Intel SGX is available for use, once they have verified that the PSW libraries are present on the system. The return values, and the Intel SGX device status, are nearly identical to sgx_cap_enable_device().
Because the PSW must be installed for this function to work, it must not be called by installers.
Windows* Code Samples
Two code samples are provided that implement these procedures as stubs: one for application installers and one for applications at runtime. The function that is responsible for the check, is_sgx_supported(), has been wrapped in a Windows* console executable for testing and convenience.
This function is prototyped as follows:
int is_sgx_supported(UINT *sgx_support);
It returns 1 if it could successfully determine the state of Intel SGX support on the system and 0 if it could not because of an error.
The state of Intel SGX support is placed in the variable sgx_support, which is passed as a pointer to the function. It’s a combination of the following bits:
    #define ST_SGX_UNSUPPORTED          0x0
    #define ST_SGX_CAPABLE              0x1
    #define ST_SGX_ENABLED              0x2
    #define ST_SGX_REBOOT_REQUIRED      0x4
    #define ST_SGX_BIOS_ENABLE_REQUIRED 0x8
A system is able to execute Intel SGX instructions if and only if the ST_SGX_ENABLED bit is set. The other bits simply convey additional information about the state of Intel SGX support, if any, and what further action might be needed to enable it.
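For example, a caller of the sample’s wrapper function might interpret those bits as follows (a minimal sketch that assumes only the prototype and flags shown above):

```cpp
#include <windows.h>
#include <stdio.h>
// Assumes is_sgx_supported() and the ST_SGX_* flags from the code sample above.

int main(void)
{
    UINT sgx_support = ST_SGX_UNSUPPORTED;

    if (!is_sgx_supported(&sgx_support)) {
        fprintf(stderr, "Could not determine Intel SGX support.\n");
        return 1;
    }

    if (sgx_support & ST_SGX_ENABLED)
        printf("Intel SGX is enabled; the SGX code path can be used.\n");
    else if (sgx_support & ST_SGX_REBOOT_REQUIRED)
        printf("Intel SGX will be available after a reboot.\n");
    else if (sgx_support & ST_SGX_BIOS_ENABLE_REQUIRED)
        printf("Intel SGX must be enabled in the BIOS setup screen.\n");
    else if (sgx_support & ST_SGX_CAPABLE)
        printf("The platform is SGX-capable, but SGX is not enabled.\n");
    else
        printf("Intel SGX is not supported on this platform.\n");

    return 0;
}
```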
Wrapping Up
Answering the question, at runtime, of whether or not a system has been enabled for Intel SGX is complicated. The dependencies on the BIOS, the PSW, and the end-user’s own actions mean that Intel SGX can be supported by a system, but not fully enabled or ready for use. Applications need to properly identify the state of Intel SGX support in order to ensure they execute the correct code path, and avoid both false positives and false negatives. The two procedures described here, one for application installers and one for applications, are the recommended methods of doing so.
Download the Windows* Code Sample
sgx_aes_ctr_encrypt counter size
The aes_ctr encrypt and decrypt functions expect the following counter parameters:
- uint8_t *p_ctr: Pointer to the counter block
- const uint32_t ctr_inc_bits: Number of bits in counter to be incremented
Regarding the counter size, two possibilities seem likely:
- The counter size is fixed. The documentation does not mention this.
- ctr_inc_bits is used both for the number of bits to increment, and as the ctr_len (i.e. all bits are incremented)
Regarding possibility 2, NIST SP 800-38A mentions methods of constructing counter blocks in which ctr_inc_bits is not equal to ctr_len. For example, in scenario 2, counter blocks with ctr_size=b are generated by using a random nonce as the b/2 most significant bits, and incrementing only the b/2 least significant bits (ctr_inc_bits=b/2).
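For illustration, the scenario-2 construction I have in mind would be roughly the following (my own sketch, assuming a 16-byte counter block and that ctr_inc_bits counts the least significant bits):

```cpp
#include <stdint.h>
#include <string.h>

// Illustrative only: build a 16-byte counter block with a random 64-bit
// nonce in the most significant half and a 64-bit block counter in the
// least significant half, then pass ctr_inc_bits = 64 so that only the
// low half is incremented.
void build_ctr_block(uint8_t ctr[16], const uint8_t nonce[8], uint64_t block_index)
{
    memcpy(ctr, nonce, 8);                 // high 64 bits: per-message nonce
    for (int i = 0; i < 8; i++)            // low 64 bits: big-endian counter
        ctr[15 - i] = (uint8_t)(block_index >> (8 * i));
}

// Hypothetical call, using the counter parameters listed above:
// sgx_aes_ctr_encrypt(&key, p_src, src_len, ctr, 64 /* ctr_inc_bits */, p_dst);
```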
I think the following is necessary:
- The encrypt and decrypt functions should have an additional parameter ctr_size
- The documentation has to mention which ctr_inc_bits bits of the counter are incremented (most or least significant)
Sidenote: sgx_rijndael128GCM_encrypt also receives an iv_len in addition to p_iv.
Introducing the Intel® Software Guard Extensions Tutorial Series
Today we are launching a multi-part tutorial series aimed at software developers who want to learn how to integrate Intel® Software Guard Extensions (Intel® SGX) into their applications. The intent of the series is to cover every aspect of the software development cycle when building an Intel SGX application, beginning at application design and running through development, testing, packaging, and deployment. While isolated code samples and individual articles are valuable, this in-depth look at enabling Intel SGX in a single application provides developers with a hands-on and holistic view of the technology as it is woven into a real-world application.
This tutorial will consist of several parts—currently 12 articles are planned, though the exact number may change—each covering a specific topic. While a precise schedule has not been set, a new part should be published every two to three weeks, proceeding through these broad phases:
- Concepts and design
- Application development and Intel SGX integration
- Validation and testing
- Packaging and deployment
- Disposition
Source code will accompany relevant sections of the series and will be distributed under the Intel Sample Source Code license. Don’t expect to start seeing source code for a few weeks, however. The first phase of the tutorial will cover the early fundamentals of Intel SGX application development.
Goals
At the end of the series, the developer will know how to:
- Identify an application’s secrets
- Apply the principles of enclave design
- Use trusted libraries in an enclave
- Build support for dual code paths in an application (to provide legacy support for platforms without Intel SGX capabilities)
- Use the Intel SGX debugger
- Create an Intel SGX application installer package
The sample application
Throughout the series we will be developing a basic password manager. The final product is not meant to be a commercially viable application, but rather one with sufficient functionality to make it a reasonable performer that follows smart security practice. This application is simple enough to be reasonably covered in the tutorial without being so simple that it’s not a useful example.
What you’ll need
Developers who want to work with the source code as it is released will require the following:
Hardware requirements
Hardware | Hard Requirement | Comments |
---|---|---|
Intel® processor with Intel® Secure Key technology | Yes | The password manager will make extensive use of the digital random number generator provided by Intel Secure Key technology. See http://ark.intel.com to find specific processor models with Intel Secure Key technology support. |
6th generation Intel® Core™ processor with Intel® Software Guard Extensions (Intel® SGX) enabled BIOS | No | To get the most out of the tutorial, a processor that supports Intel SGX is necessary, but the application development can take place on a lesser system and Intel SGX applications can be run in the simulator provided with the SDK. |
Software requirements
These software requirements are based on the current, public release of the Intel SGX Software Developer’s Kit (SDK). As newer versions of the SDK are released, the requirements may change.
Updated July 11, 2016: The SDK requirement has been updated to 1.6. This also forced the Microsoft Visual Studio* version to 2013.
Software | Hard Requirement | Comments |
---|---|---|
Intel® Software Guard Extensions (Intel® SGX) SDK v1.6 | Yes | Required for developing Intel SGX applications. |
Microsoft Visual Studio* 2013 Professional Edition | Yes | Required for the SDK. Each SDK release is tied to specific versions of Visual Studio in order to enable the wizards, developer tools, and various integration components. |
Intel® Parallel Studio XE 2013 Professional Edition for Windows* | No | This is recommended but it is not strictly necessary for Intel SGX development. |
Stay tuned
This series will cover every aspect of the software development cycle when building an Intel SGX application, beginning at application design, and running through development, testing, packaging, and deployment. The tutorials will cover concepts and design, application development and Intel SGX integration, validation and testing, packaging and deployment, and disposition.
We’re excited to be launching this series and are looking forward to having you join us!
Getting started
Part 1 of the series, Intel SGX Foundation, provides an overview of the technology and lays the groundwork for the rest of the tutorial.
How to encode the quote structure.
Hi,
I have got the quote structure by using the sgx_get_quote() function, but I don't know how to encode the quote structure. It seems that we have to base64-encode the quote structure so that we can call the IAS API. Can anyone tell me how to encode the quote?
Thanks,
Chen
Intel® Software Guard Extensions Tutorial Series: Part 1, Intel® SGX Foundation
The first part in the Intel® Software Guard Extensions (Intel® SGX) tutorial series is a brief overview of the technology. For more detailed information, see the documentation provided in the Intel Software Guard Extensions SDK. Find the list of all the tutorials in this series in the article Introducing the Intel® Software Guard Extensions Tutorial Series.
Understanding Intel® Software Guard Extensions Technology
Software applications frequently need to work with private information such as passwords, account numbers, financial information, encryption keys, and health records. This sensitive data is intended to be accessed only by the designated recipient. In Intel SGX terminology, this private information is referred to as an application’s secrets.
The operating system’s job is to enforce security policy on the computer system so that these secrets are not unintentionally exposed to other users and applications. The OS will prevent a user from accessing another user’s files (unless permission to do so has been explicitly granted), one application from accessing another application’s memory, and an unprivileged user from accessing OS resources except through tightly controlled interfaces. Applications often employ additional safeguards, such as data encryption, to ensure that data sent to storage or over a network connection cannot be accessed by third parties even if the OS and hardware are compromised.
Despite these protections, there is still a significant vulnerability present in most computer systems: while there are numerous guards in place that protect one application from another, and the OS from an unprivileged user, an application has virtually no protection from processes running with higher privileges, including the OS itself. Malware that obtains administrative privileges has unrestricted access to all system resources and all applications running on the system. Sophisticated malware can target an application’s protection schemes to extract encryption keys and even the secret data itself directly from memory.
To enable the high-level protection of secrets and help defend against these software attacks, Intel designed Intel SGX. Intel SGX is a set of CPU instructions that enable applications to create enclaves: protected areas in the application’s address space that provide confidentiality and integrity even in the presence of privileged malware. Enclave code is enabled by using special instructions, and it is built and loaded as a Windows* Dynamic Link Library (DLL) file.
Intel SGX can reduce the attack surface of an application. Figure 1 demonstrates the dramatic difference between attack surfaces with and without the help of Intel SGX enclaves.
Figure 1: Attack-surface areas with and without Intel® Software Guard Extensions enclaves.
How Intel Software Guard Extensions Technology Helps Secure Data
Intel SGX offers the following protections from known hardware and software attacks:
- Enclave memory cannot be read or written from outside the enclave regardless of the current privilege level and CPU mode.
- Production enclaves cannot be debugged by software or hardware debuggers. (An enclave can be created with a debug attribute that allows a special debugger—the Intel SGX debugger—to view its content like a standard debugger. This is intended to aid the software development cycle.)
- The enclave environment cannot be entered through classic function calls, jumps, register manipulation, or stack manipulation. The only way to call an enclave function is through a new instruction that performs several protection checks.
- Enclave memory is encrypted using industry-standard encryption algorithms with replay protection. Tapping the memory or connecting the DRAM modules to another system will yield only encrypted data (see Figure 2).
- The memory encryption key randomly changes every power cycle (for example, at boot time, and when resuming from sleep and hibernation states). The key is stored within the CPU and is not accessible.
- Data isolated within enclaves can only be accessed by code that shares the enclave.
There is a hard limit on the size of the protected memory, set by the system BIOS, and typical values are 64 MB and 128 MB. Some system providers may make this limit a configurable option within their BIOS setup. Depending on the footprint of each enclave, you can expect that between 5 and 20 enclaves can simultaneously reside in memory.
Figure 2: How Intel® Software Guard Extensions helps secure enclave data in protected applications.
Design Considerations
Application design with Intel SGX requires that the application be divided into two components (see Figure 3):
- Trusted component. This is the enclave. The code that resides in the trusted component is the code that accesses an application’s secrets. An application can have more than one trusted component/enclave.
- Untrusted component. This is the rest of the application and any of its modules. It is important to note that, from the standpoint of an enclave, the OS and the VMM are considered untrusted components.
The trusted component should be as small as possible, limited to the data that needs the most protection and those operations that must act directly on it. A large enclave with a complex interface doesn’t just consume more protected memory: it also creates a larger attack surface.
Enclaves should also have minimal trusted-untrusted component interaction. While enclaves can leave the protected memory region and call functions in the untrusted component (through the use of a special instruction), limiting these dependencies will strengthen the enclave against attack.
Figure 3: Intel® Software Guard Extensions application execution flow.
Attestation
In the Intel SGX architecture, attestation refers to the process of demonstrating that a specific enclave was established on a platform. There are two attestation mechanisms:
- Local attestation occurs when two enclaves on the same platform authenticate to each other.
- Remote attestation occurs when an enclave gains the trust of a remote provider.
Local Attestation
Local attestation is useful when applications have more than one enclave that need to work together to accomplish a task or when two separate applications must communicate data between enclaves. Each enclave must verify the other in order to confirm that they are both trustworthy. Once that is done, they establish a protected session and use an ECDH Key Exchange to share a session key. That session key can be used to encrypt the data that must be shared between the two enclaves.
Because one enclave cannot access another enclave’s protected memory space, even when running under the same application, all pointers must be dereferenced to their values and copied, and the complete data set must be marshaled from one enclave to the other.
Remote Attestation
With remote attestation, a combination of Intel SGX software and platform hardware is used to generate a quote that is sent to a third-party server to establish trust. The software includes the application’s enclave, and the Quoting Enclave (QE) and Provisioning Enclave (PvE), both of which are provided by Intel. The attestation hardware is the Intel SGX-enabled CPU. A digest of the software information is combined with a platform-unique asymmetric key from the hardware to generate the quote, which is sent to a remote server over an authenticated channel. If the remote server determines that the enclave was properly instantiated and is running on a genuine Intel SGX-capable processor, it can now trust the enclave and choose to provision secrets to it over the authenticated channel.
Sealing Data
Sealing data is the process of encrypting it so that it can be written to untrusted memory or storage without revealing its contents. The data can be read back in by the enclave at a later date and unsealed (decrypted). The encryption keys are derived internally on demand and are not exposed to the enclave.
There are two methods of sealing data:
- Enclave Identity. This method produces a key that is unique to this exact enclave.
- Sealing Identity. This method produces a key that is based on the identity of the enclave’s Sealing Authority. Multiple enclaves from the same signing authority can derive the same key.
Sealing to the Enclave Identity
When sealing to the Enclave Identity, the key is unique to the particular enclave that sealed the data and any change to the enclave that impacts its signature will result in a new key. With this method, data sealed by one version of an enclave is inaccessible by other versions of the enclave, so a side effect of this approach is that sealed data cannot be migrated to newer versions of the application and its enclave. This is intended for applications where old, sealed data should not be used by newer versions of the application.
Sealing to the Sealing Identity
When sealing to the sealing identity, multiple enclaves from the same authority can transparently seal and unseal each other’s data. This allows data from one version of an enclave to be migrated to another, or to be shared among applications from the same software vendor.
If older versions of the software and enclave need to be prevented from accessing data that is sealed by newer application versions, the authority can choose to include a Software Version Number (SVN) when signing the enclave. Enclave versions older than the specified SVN will not be able to derive the sealing key and thus will be prevented from unsealing the data.
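These two sealing policies map onto the key-policy options of the SDK’s sealing API. The sketch below shows the general shape of a seal operation inside an enclave; it assumes sgx_seal_data() defaults to the Sealing Identity (MRSIGNER) policy as described in the SDK documentation, and the buffer handling is illustrative only.

```cpp
#include <stdint.h>
#include "sgx_tseal.h"   // trusted sealing API, used from inside the enclave

// Seal a secret for later storage outside the enclave. sgx_seal_data()
// is assumed to default to the Sealing Identity (MRSIGNER) policy, so any
// enclave signed by the same authority can unseal the result; binding the
// data to this exact enclave instead would use the MRENCLAVE key policy
// via the extended sealing function.
sgx_status_t seal_secret(const uint8_t *secret, uint32_t secret_len,
                         uint8_t *sealed_buf, uint32_t sealed_buf_len)
{
    uint32_t need = sgx_calc_sealed_data_size(0, secret_len);
    if (need == UINT32_MAX || need > sealed_buf_len)
        return SGX_ERROR_INVALID_PARAMETER;

    return sgx_seal_data(0, NULL,                 // no additional MAC text
                         secret_len, secret,      // plaintext to protect
                         need, (sgx_sealed_data_t *)sealed_buf);
}
```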
How We’ll Use Intel Software Guard Extensions Technology in the Tutorial
We’ve described the three key components of Intel SGX: enclaves, attestation, and sealing. For this tutorial, we’ll focus on implementing enclaves since they are at the core of Intel SGX. You can’t do attestation or sealing without establishing an enclave in the first place. This will also keep the tutorial to a manageable size.
Coming Up Next
Part 2 of the tutorial will focus on the password manager application that we’ll be building and enabling for Intel SGX. We’ll cover the design requirements, constraints, and the user interface. Stay tuned!
Intel® Software Guard Extensions Tutorial Series: Part 2, Application Design
The second part in the Intel® Software Guard Extensions (Intel® SGX) tutorial series is a high-level specification for the application we’ll be developing: a simple password manager. Since we’re building this application from the ground up, we have the luxury of designing for Intel SGX from the start. That means that in addition to laying out our application’s requirements, we’ll examine how Intel SGX design decisions and the overall application architecture influence one another.
Read the first tutorial in the series or find the list of all of the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.
Password Managers At-A-Glance
Most people are probably familiar with password managers and what they do, but it’s a good idea to review the fundamentals before we get into the details of the application design itself.
The primary goals of a password manager are to:
- Reduce the number of passwords that end users need to remember.
- Enable end users to create stronger passwords than they would normally choose on their own.
- Make it practical to use a different password for every account.
Password management is a growing problem for Internet users, and numerous studies have tried to quantify the problem over the years. A Microsoft study published in 2007—nearly a decade ago as of this writing—estimated that the average person had 25 accounts that required passwords. More recently, in 2014 Dashlane estimated that their US users had an average of 130 accounts, while the number of accounts for their worldwide users averaged in the 90s. And the problems don’t end there: people are notoriously bad at picking “good” passwords, frequently reusing the same password on multiple sites, which has led to some spectacular attacks. These problems boil down to two basic issues: passwords that are hard for hacking tools to guess are often difficult for people to remember, and having a greater number of passwords compounds the problem because the user must also remember which password goes with which account.
With a password manager, you only need to remember one very strong passphrase in order to gain access to your password database or vault. Once you have authenticated to your password manager, you can look up any passwords you have stored, and copy and paste them into authentication fields as needed. Of course, the key vulnerability of the password manager is the password database itself: since it contains all of the user’s passwords it is an attractive target for attackers. For this reason, the password database is encrypted with strong encryption techniques, and the user’s master passphrase becomes the means for decrypting the data inside of it.
Our goal in this tutorial is to build a simple password manager that provides the same core functions as a commercial product while following good security practices and use that as a learning vehicle for designing for Intel SGX. The tutorial password manager, which we’ll name the “Tutorial Password Manager with Intel® Software Guard Extensions” (yes, that’s a mouthful, but it’s descriptive), is not intended to function as a commercial product and certainly won’t contain all the safeguards found in one, but that level of detail is not necessary.
Basic Application Requirements
Some basic application requirements will help narrow down the scope of the application so that we can focus on the Intel SGX integration rather than the minutiae of application design and development. Again, the goal is not to create a commercial product: the Tutorial Password Manager with Intel SGX does not need to run on multiple operating systems or on all possible CPU architectures. All we require is a reasonable starting point.
To that end, our basic application requirements are:
The first requirement may seem strange given that this tutorial series is about Intel SGX application development, but real-world applications need to consider the legacy installation base. For some applications it may be appropriate to restrict execution only to Intel SGX-capable platforms, but for the Tutorial Password Manager we’ll use a less rigid approach. An Intel SGX-capable platform will receive a hardened execution environment, but non-capable platforms will still function. This usage is appropriate for a password manager, where the user may need to synchronize his or her password database with other, older systems. It is also a learning opportunity for implementing dual code paths.
The second requirement gives us access to certain cryptographic algorithms in the non-Intel SGX code path and to some libraries that we’ll need. The 64-bit requirement simplifies application development by ensuring access to native 64-bit types and also provides a performance boost for certain cryptographic algorithms that have been optimized for 64-bit code.
The third requirement gives us access to the RDRAND instruction in the non-Intel SGX code path. This greatly simplifies random number generation and ensures access to a high-quality entropy source. Systems that support the RDSEED instruction will make use of that as well. (For information on the RDRAND and RDSEED instructions, see the Intel® Digital Random Number Generator Software Implementation Guide.)
The fourth requirement keeps the list of software required by the developer (and the end user) as short as possible. No third-party libraries, frameworks, applications, or utilities need to be downloaded and installed. However, this requirement has an unfortunate side effect: without third-party frameworks, there are only four options available to us for creating the user interface. Those options are:
- Win32 APIs
- Microsoft Foundation Classes (MFC)
- Windows Presentation Foundation (WPF)
- Windows Forms
The first two are implemented in native/unmanaged code while the latter two require .NET*.
The User Interface Framework
For the Tutorial Password Manager, we’re going to be developing the GUI using Windows Presentation Foundation in C#. This design decision impacts our requirements as follows:
Why use WPF? Mostly because it simplifies the UI design while introducing complexity that we actually want. Specifically, by relying on the .NET Framework, we have the opportunity to discuss mixing managed code, and specifically high-level languages, with enclave code. Note, though, that choosing WPF over Windows Forms was arbitrary: either environment would work.
As you might recall, enclaves must be written in native C or C++ code, and the bridge functions that interact with the enclave must be native C (not C++) functions. While both Win32 APIs and MFC provide an opportunity to develop the password manager with 100-percent native C/C++ code, the burden imposed by these two methods does nothing for those who want to learn Intel SGX application development. With a GUI based in managed code, we not only reap the benefits of the integrated design tools but also have the opportunity to discuss something that is of potential value to Intel SGX application developers. In short, you aren’t here to learn MFC or raw Win32, but you might want to know how to glue .NET to enclaves.
To bridge the managed and unmanaged code we’ll be using C++/CLI (C++ modified for Common Language Infrastructure). This greatly simplifies the data marshaling and is so convenient and easy to use that many developers refer to it as IJW (“It Just Works”).
Figure 1: Minimum component structures for native and C# Intel® Software Guard Extensions applications.
Figure 1 shows the impact to an Intel SGX application’s minimum component makeup when it is moved from native code to C#. In the fully native application, the application layer can interact directly with the enclave DLL since the enclave bridge functions can be incorporated into the application’s executable. In a mixed-mode application, however, the enclave bridge functions need to be isolated from the managed code block because they are required to be 100-percent native code. The C# application, on the other hand, can’t interact with the bridge functions directly, and in the C++/CLI model that means creating another intermediary: a DLL that marshals data between the managed C# application and the native, enclave bridge DLL.
Password Vault Requirements
At the core of the password manager is the password database, or what we’ll be referring to as the password vault. This is the encrypted file that will hold the end user’s account information and passwords. The basic requirements for our tutorial application are:
The requirement that the vault be portable means that we should be able to copy the vault file to another computer and still be able to access its contents, whether or not that computer supports Intel SGX. In other words, the user experience should be the same: the password manager should work seamlessly (so long as the system meets the base hardware and OS requirements, of course).
Encrypting the vault at rest means that the vault file should be encrypted when it is not actively in use. At a minimum, the vault must be encrypted on disk (without the portability requirement, we could potentially solve the encryption requirements by using the sealing feature of Intel SGX) and should not sit decrypted in memory longer than is necessary.
Authenticated encryption provides assurances that the encrypted vault has not been modified after the encryption has taken place. It also gives us a convenient means of validating the user’s passphrase: if the decryption key is incorrect, the decryption will fail when validating the authentication tag. That way, we don’t have to examine the decrypted data to see if it is correct.
Passwords
Any account information is sensitive information for a variety of reasons, not the least of which is that it tells an attacker exactly which logins and sites to target, but the passwords are arguably the most critical piece of the vault. Knowing what account to attack is not nearly as attractive as not needing to attack it at all. For this reason, we’ll introduce additional requirements on the passwords stored in the vault:
This is nesting the encryption. The passwords for each of the user’s accounts are encrypted when stored in the vault, and the entire vault is encrypted when written to disk. This approach allows us to limit the exposure of the passwords once the vault has been decrypted. It is reasonable to decrypt the vault as a whole so that the user can browse their account details, but displaying all of their passwords in clear text in this manner would be inappropriate.
An account password is only decrypted when a user asks to see it. This limits its exposure both in memory and on the user’s display.
Cryptographic Algorithms
With the encryption needs identified it is time to settle on the specific cryptographic algorithms, and it’s here that our existing application requirements impose some significant limits on our options. The Tutorial Password Manager must provide a seamless user experience on both Intel SGX and non-Intel SGX platforms, and it isn’t allowed to depend on third-party libraries. That means we have to choose an algorithm, and a supported key and authentication tag size, that is common to both the Windows CNG API and the Intel SGX trusted crypto library. Practically speaking, this leaves us with just one option: Advanced Encryption Standard-Galois Counter Mode (AES-GCM) with a 128-bit key. This is arguably not the best encryption mode to use in this application, especially since the effective authentication tag strength of 128-bit GCM is less than 128 bits, but it is sufficient for our purposes. Remember: the goal here is not to create a commercial product, but rather a useful learning vehicle for Intel SGX development.
With GCM come some other design decisions, namely the IV length (12 bytes is most efficient for the algorithm) and the length of the authentication tag.
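On the Intel SGX code path this algorithm is provided by the trusted cryptography library. The sketch below shows roughly how an enclave might encrypt a vault buffer with it; the function comes from sgx_tcrypto.h, while the wrapper, buffer names, and sizes are illustrative assumptions.

```cpp
#include "sgx_tcrypto.h"   // trusted crypto library (inside the enclave)

// Encrypt a vault buffer with AES-128-GCM using a 12-byte IV, producing
// ciphertext plus a 128-bit authentication tag. Illustrative sketch only.
sgx_status_t encrypt_vault(const sgx_aes_gcm_128bit_key_t *key,
                           const uint8_t *plaintext, uint32_t len,
                           uint8_t *ciphertext,
                           uint8_t iv[12],
                           sgx_aes_gcm_128bit_tag_t *tag)
{
    // The IV must be unique per encryption under the same key; here we
    // assume the caller filled it with fresh random bytes (e.g., sgx_read_rand).
    return sgx_rijndael128GCM_encrypt(key,
                                      plaintext, len,      // input
                                      ciphertext,          // output
                                      iv, 12,              // 12-byte IV
                                      NULL, 0,             // no additional AAD
                                      tag);                // 128-bit auth tag
}
```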
Encryption Keys and User Authentication
With the encryption method chosen, we can turn our attention to the encryption key and user authentication. How will the user authenticate to the password manager in order to unlock their vault?
The simple approach would be to derive the encryption key directly from the user’s passphrase or password using a key derivation function (KDF). But while the simple approach is a valid one, it does have one significant drawback: if the user changes his or her password, the encryption key changes along with it. Instead, we’ll follow the more common practice of encrypting the encryption key.
In this method, the primary encryption key is randomly generated using a high-quality entropy source and it never changes. The user’s passphrase or password is used to derive a secondary encryption key, and the secondary key is used to encrypt the primary key. This approach has some key advantages:
- The data does not have to be re-encrypted when the user’s password or passphrase changes
- The encryption key never changes, so it could theoretically be written down in, say, hexadecimal notation and locked in a physically secure location. The data could thus still be decrypted even if the user forgot his or her password. Since the key never changes, it would only have to be written down once.
- More than one user could, in theory, be granted access to the data. Each would encrypt a copy of the primary key with their own passphrase.
Not all of these are necessarily critical or relevant to the Tutorial Password Manager, but it’s a good security practice nonetheless.
Here the primary key is called the vault key, and the secondary key that is derived from the user’s passphrase is called the master key. The user authenticates by entering their passphrase, and the password manager derives a master key from it. If the master key successfully decrypts the vault key, the user is authenticated and the vault can be decrypted. If the passphrase is incorrect, the decryption of the vault key fails and that prevents the vault from being decrypted.
The final requirement, building the KDF around SHA-256, comes from the constraint that we find a hashing algorithm common to both the Windows CNG API and the Intel SGX trusted crypto library.
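A simplified sketch of the scheme follows. The tutorial’s actual KDF has not been specified beyond being built on SHA-256, so the single salted hash below is only a placeholder for a real iterated KDF, and the helper names are invented for illustration.

```cpp
#include <stdint.h>
#include <string.h>
#include "sgx_tcrypto.h"

// Placeholder KDF: a single salted SHA-256 over (salt || passphrase),
// truncated to 128 bits. A real implementation would use a proper,
// tunable key derivation function built on SHA-256.
static sgx_status_t derive_master_key(const char *passphrase, const uint8_t salt[16],
                                      sgx_aes_gcm_128bit_key_t *master_key)
{
    uint8_t buf[16 + 256];
    size_t plen = strlen(passphrase);
    if (plen > 256) return SGX_ERROR_INVALID_PARAMETER;

    memcpy(buf, salt, 16);
    memcpy(buf + 16, passphrase, plen);

    sgx_sha256_hash_t hash;
    sgx_status_t rc = sgx_sha256_msg(buf, (uint32_t)(16 + plen), &hash);
    if (rc != SGX_SUCCESS) return rc;

    memcpy(master_key, hash, 16);   // truncate the 256-bit digest to a 128-bit key
    return SGX_SUCCESS;
}

// Authenticate by trying to decrypt the stored, encrypted vault key with the
// derived master key; GCM tag verification fails if the passphrase is wrong.
sgx_status_t unlock_vault_key(const char *passphrase,
                              const uint8_t salt[16], const uint8_t iv[12],
                              const uint8_t enc_vault_key[16],
                              const sgx_aes_gcm_128bit_tag_t *tag,
                              sgx_aes_gcm_128bit_key_t *vault_key)
{
    sgx_aes_gcm_128bit_key_t master_key;
    sgx_status_t rc = derive_master_key(passphrase, salt, &master_key);
    if (rc != SGX_SUCCESS) return rc;

    return sgx_rijndael128GCM_decrypt(&master_key, enc_vault_key, 16,
                                      (uint8_t *)vault_key, iv, 12,
                                      NULL, 0, tag);
}
```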
Account Details
The last of the high-level requirements is what actually gets stored in the vault. For this tutorial, we are going to keep things simple. Figure 2 shows an early mockup of the main UI screen.
Figure 2: Early mockup of the Tutorial Password Manager main screen.
The last requirement is all about simplifying the code. By fixing the number of accounts stored in the vault, we can more easily put an upper bound on how large the vault can be. This will be important when we start designing our enclave. Real-world password managers do not, of course, have this luxury, but it is one that can be afforded for the purposes of this tutorial.
Coming Up Next
In part 3 of the tutorial we’ll take a closer look at designing our Tutorial Password Manager for Intel SGX. We’ll identify our secrets, which portions of the application should be contained inside the enclave, how the enclave will interact with the core application, and how the enclave impacts the object model. Stay tuned!
Read the first tutorial in the series, Intel® Software Guard Extensions Tutorial Series: Part 1, Intel® SGX Foundation or find the list of all the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.
how to use ocalls in SGX enclaves?
We are trying to use system calls inside an enclave.
For this, we defined OCALL functions, since syscalls cannot be invoked directly from the enclave.
Under the Application project (i.e., non-enclave code), we have ocalls.h/cpp. Also, under the Enclave project, we have ocallsWrapper.h/cpp.
For the design, we followed this Link
When rebuilding the solution, we get 2 errors:
error LNK1120: 1 unresolved externals
error LNK2019: unresolved external symbol _ocall_read referenced in function _EnclaveExecBat_ocall_read
Which we couldn't resolve, as we worked according to the code example provided with the SDK by Intel.
We need a direction to implement the OCALL functions such that we can invoke them from within the enclave (even though they are declared as 'untrusted' in the EDL - see below).
here is the content of our project (with code snippets):
======================= EDL: =======================
    enclave {
        trusted {
            /* define ECALLs here. */
            public void trusted_foo([in, size=len] char* password, size_t len);
        };

        untrusted {
            /* define OCALLs here. */
            void ocall_read([in, size=len] char* str, size_t len);
        };
    };
======================= App.cpp: =======================
#include "stdafx.h" #include "sgx_urts.h" #include "EnclaveExecBat_u.h" #define ENCLAVE_FILE _T("EnclaveExecBat.signed.dll") using namespace std; int main(int argc, _TCHAR* argv[]) { sgx_enclave_id_t eid; sgx_status_t ret = SGX_SUCCESS; sgx_launch_token_t token = {0}; int updated = 0; // Create the Enclave with above launch token. ret = sgx_create_enclave(ENCLAVE_FILE, SGX_DEBUG_FLAG, &token, &updated, &eid, NULL); if (ret != SGX_SUCCESS) { printf("App: error %#x, failed to create enclave.\n", ret); return -1; } trusted_foo(eid, NULL, 0); // Destroy the enclave when all Enclave calls finished. if(SGX_SUCCESS != sgx_destroy_enclave(eid)) return -1; getchar(); return 0; }
======================= ocalls.h (under App project) =======================
    //untrusted
    #ifndef __OCALLS
    #define __OCALLS

    void ocall_read(char* str, size_t len);

    #endif //__OCALLS
======================= ocalls.cpp (under App project) =======================
#include "stdafx.h" #include <stdio.h> #include "ocalls.h" void ocall_read(char* str, size_t len) { printf("in ocall_read\n"); }
======================= EnclaveExecBat.cpp: =======================
#include "EnclaveExecBat_t.h" #include "sgx_trts.h" #include "ocallsWrapper.h" void trusted_foo(char* password, size_t len) { read(NULL , 0); }
======================= ocallsWrapper.cpp: =======================
#include "ocallsWrapper.h" #include "EnclaveExecBat_t.h" #include "sgx_trts.h" void read(char* str, size_t len) { ocall_read(str, len); }
======================= ocallsWrapper.h =======================
    #ifndef __OCALL_WRAPPER
    #define __OCALL_WRAPPER

    void read(char* str, size_t len);

    #endif //__OCALL_WRAPPER
----------------------------------
How can we adjust our untrusted code so that it can be invoked from within the enclave?
Thanks in advance.
Part 3 of the Intel® Software Guard Extensions Tutorial Series is Coming Soon
Part 3 of my Intel® Software Guard Extensions Tutorial Series is ready to go, but is being held up in a legal approval loop. I expect this to be resolved in the next week or two. I apologize for the delays. The good news is, this delay should only happen once.
I am still committed to this series, and am already at work on Part 4.
Using Enclaves from .NET: Making ECALLS with Callbacks via OCALLS
One question about Intel® Software Guard Extensions (Intel® SGX) that comes up frequently is how to mix enclaves with managed code on Microsoft Windows* platforms, particularly with the C# language. While enclaves themselves must be 100 percent native code and the enclave bridge functions must be 100 percent native code with C (and not C++) linkages, it is possible, indirectly, to make an ECALL into an enclave from .NET and to make an OCALL from an enclave into a .NET object. There are multiple solutions for accomplishing these tasks, and this article and its accompanying code sample demonstrate one approach.
Mixing Managed Code and Native Code with C++/CLI
Microsoft Visual Studio* 2005 and later offers three options for calling unmanaged code from managed code:
- Platform Invocation Services, commonly referred to by developers as P/Invoke
- COM
- C++/CLI
P/Invoke is good for calling simple C functions in a DLL, which makes it a reasonable choice for interfacing with enclaves, but writing P/Invoke wrappers and marshaling data can be difficult and error-prone. COM is more flexible than P/Invoke, but it is also more complicated; that additional complexity is unnecessary for interfacing with the C bridge functions required by enclaves. This code sample uses the C++/CLI approach.
C++/CLI offers significant convenience by allowing the developer to mix managed and unmanaged code in the same module, creating a mixed-mode assembly which can in turn be linked to modules comprised entirely of either managed or native code. Data marshaling in C++/CLI is also fairly easy: for simple data types it is done automatically through direct assignment, and helper methods are provided for more complex types such as arrays and strings. Data marshaling is, in fact, so painless in C++/CLI that developers often refer to the programming model as IJW (an acronym for “it just works”).
The trade-off for this convenience is that there can be a small performance penalty due to the extra layer of functions, and it does require that you produce an additional DLL when interfacing with Intel SGX enclaves.
Figure 1. Minimum component makeup of an Intel® Software Guard Extensions application written in C# and C++/CLI.
Figure 1 illustrates the component makeup of a C# application when using the C++/CLI model. The managed application consists of, at minimum, a C# executable, a C++/CLI DLL, the native enclave bridge DLL, and the enclave DLL itself.
The Sample Application
The sample application provides two functions that execute inside of an enclave: one calls CPUID, and the other generates random data in 1KB blocks and XORs them together to produce a final 1KB block of random bytes. This is a multithreaded application, and you can run all three tasks simultaneously. The user interface is shown in Figure 2.
Figure 2: Sample application user interface.
To build the application you will need the Intel SGX SDK. This sample was created using the 1.6 Intel SGX SDK and built with Microsoft Visual Studio 2013. It targets the .NET framework 4.5.1.
The CPUID Tab
On the CPUID panel, you enter a value for EAX to pass to the CPUID instruction. When you click Query, the program executes an ECALL on the current thread and runs the sgx_cpuid() function inside the enclave. Note that sgx_cpuid() does, in turn, make an OCALL to execute the CPUID instruction, since CPUID is not a legal instruction inside an enclave. This OCALL is automatically generated for you by the Edger8r tool when you build your enclave. See the Intel SGX SDK Developer Guide for more information on the sgx_cpuid() function.
The RDRAND Tab
On the RDRAND panel you can generate up to two simultaneous background threads. Each thread performs the same task: it makes an ECALL to enter the enclave and generates the target amount of random data using the sgx_read_rand() function in 1 KB blocks. Each 1 KB block is XORed with the previous block to produce a final 1 KB block of random data that is returned to the application (the first block is XORed with a block of 0s).
For every 1 MB of random data that is generated, the function also executes an OCALL to send the progress back up to the main application via a callback. The callback function then runs a thread in the UI context to update the progress bar.
Because this function runs asynchronously, you can have both threads in the UI active at once and even switch to the CPUID tab to execute that ECALL while the RDRAND ECALLs are still active.
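As a point of reference, the enclave side of that loop might look roughly like the sketch below. This is illustrative only (the function name and structure are assumptions, not the sample’s actual source), and the progress OCALL the real function makes every megabyte is omitted here; it is covered in detail later.

// Sketch of the trusted side of the random-data ECALL: generate kb kilobytes
// of random data in 1 KB blocks and XOR them together into the output block.
#include <string.h>
#include "sgx_trts.h"      /* sgx_read_rand() */

int e_genrand_sketch(int kb, unsigned char block[1024])
{
    unsigned char chunk[1024];
    int i, j;

    memset(block, 0, 1024);    /* the first block is XORed with a block of 0s */

    for (i = 0; i < kb; ++i) {
        if (sgx_read_rand(chunk, 1024) != SGX_SUCCESS) return 0;
        for (j = 0; j < 1024; ++j) block[j] ^= chunk[j];
    }

    return 1;
}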
Overall Structure
The application is made up of the following components, three of which we’ll examine in detail:
- C# application. A Windows Forms*-based application that implements the user interface.
- EnclaveLink.dll. A mixed-mode DLL responsible for marshaling data between .NET and native code. This assembly contains two classes: EnclaveLinkManaged and EnclaveLinkNative.
- EnclaveBridge.dll. A native DLL containing the enclave bridge functions. These are pure C functions.
- Enclave.dll (Enclave.signed.dll). The Intel SGX enclave.
There is also a fifth component, sgx_support_detect.dll, which is responsible for the runtime check of Intel SGX capability. It ensures that the application exits gracefully when run on a system that does not support Intel SGX. We won’t be discussing this component here, but for more information on how it works and why it’s necessary, see the article Properly Detecting Intel® Software Guard Extensions in Your Applications.
The enclave is not created immediately when the application launches; at startup, the application only initializes some global variables for referencing the enclave and creates a mutex. When a UI event occurs, the first thread that needs to run an enclave function checks whether the enclave has already been created, and if not, it launches the enclave. All subsequent threads and events reuse that same enclave. To keep the sample application architecture relatively simple, the enclave is not destroyed until the program exits.
The C# Application
The main executable is written in C#. It requires a reference to the EnclaveLink DLL in order to execute the C/C++ methods that eventually call into the enclave.
On startup, the application calls static methods to prepare the application for the enclave, and then closes it on exit:
public FormMain()
{
    InitializeComponent();

    // This doesn't create the enclave, it just initializes what we need
    // to do so in a multithreaded environment.
    EnclaveLinkManaged.init_enclave();
}

~FormMain()
{
    // Destroy the enclave (if we created it).
    EnclaveLinkManaged.close_enclave();
}
These two functions are simple wrappers around functions in EnclaveLinkNative and are discussed in more detail below.
When either the CPUID or RDRAND functions are executed via the GUI, the application creates an instance of class EnclaveLinkManaged and executes the appropriate method. The CPUID execution flow is shown, below:
private void buttonCPUID_Click(object sender, EventArgs e)
{
    int rv;
    UInt32[] flags = new UInt32[4];
    EnclaveLinkManaged enclave = new EnclaveLinkManaged();

    // Query CPUID and get back an array of 4 32-bit unsigned integers
    rv = enclave.cpuid(Convert.ToInt32(textBoxLeaf.Text), flags);
    if (rv == 1)
    {
        textBoxEAX.Text = String.Format("{0:X8}", flags[0]);
        textBoxEBX.Text = String.Format("{0:X8}", flags[1]);
        textBoxECX.Text = String.Format("{0:X8}", flags[2]);
        textBoxEDX.Text = String.Format("{0:X8}", flags[3]);
    }
    else
    {
        MessageBox.Show("CPUID query failed");
    }
}
The callbacks for the progress bar in the RDRAND execution flow are implemented using a delegate, which creates a task in the UI context to update the display. The callback methodology is described in more detail later.
Boolean cancel = false;
progress_callback callback;
TaskScheduler uicontext;

public ProgressRandom(int mb_in, int num_in)
{
    enclave = new EnclaveLinkManaged();
    mb = mb_in;
    num = num_in;
    uicontext = TaskScheduler.FromCurrentSynchronizationContext();
    callback = new progress_callback(UpdateProgress);

    InitializeComponent();

    labelTask.Text = String.Format("Generating {0} MB of random data", mb);
}

private int UpdateProgress(int received, int target)
{
    Task.Factory.StartNew(() =>
    {
        progressBarRand.Value = 100 * received / target;
        this.Text = String.Format("Thread {0}: {1}% complete", num, progressBarRand.Value);
    }, CancellationToken.None, TaskCreationOptions.None, uicontext);

    return (cancel) ? 0 : 1;
}
The EnclaveLink DLL
The primary purpose of the EnclaveLink DLL is to marshal data between .NET and unmanaged code. It is a mixed-mode assembly that contains two objects:
- EnclaveLinkManaged, a managed class that is visible to the C# layer
- EnclaveLinkNative, a native C++ class
EnclaveLinkManaged contains all of the data marshaling functions, and its methods have variables in both managed and unmanaged memory. It ensures that only unmanaged pointers and data get passed to EnclaveLinkNative. Each instance of EnclaveLinkManaged contains an instance of EnclaveLinkNative, and the methods in EnclaveLinkManaged are essentially wrappers around the methods in the native class.
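As an illustration of this wrapper pattern, the managed cpuid() method might look like the sketch below (the signature of the native class's cpuid() method is assumed here for illustration):

// Sketch: copy results from an unmanaged buffer into the managed array
// that the C# layer passed in.
int EnclaveLinkManaged::cpuid(int leaf, array<UINT32>^ flags)
{
    UINT32 nflags[4];

    // The native class calls into the enclave bridge DLL.
    int rv = native->cpuid(leaf, nflags);

    // Marshal the results back to the managed array for the C# caller.
    for (int i = 0; i < 4; ++i) flags[i] = nflags[i];

    return rv;
}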
EnclaveLinkNative is responsible for interfacing with the enclave bridge functions in the EnclaveBridge DLL. It also is responsible for initializing the global enclave variables and handling the locking.
#define MUTEX L"Enclave"

static sgx_enclave_id_t eid = 0;
static sgx_launch_token_t token = { 0 };
static HANDLE hmutex;

int launched = 0;

void EnclaveLinkNative::init_enclave()
{
    hmutex = CreateMutex(NULL, FALSE, MUTEX);
}

void EnclaveLinkNative::close_enclave()
{
    if (WaitForSingleObject(hmutex, INFINITE) != WAIT_OBJECT_0) return;

    if (launched) en_destroy_enclave(eid);
    eid = 0;
    launched = 0;

    ReleaseMutex(hmutex);
}

int EnclaveLinkNative::get_enclave(sgx_enclave_id_t *id)
{
    int rv = 1;
    int updated = 0;

    if (WaitForSingleObject(hmutex, INFINITE) != WAIT_OBJECT_0) return 0;

    if (launched) *id = eid;
    else {
        sgx_status_t status;

        status = en_create_enclave(&token, &eid, &updated);
        if (status == SGX_SUCCESS) {
            *id = eid;
            rv = 1;
            launched = 1;
        } else {
            rv = 0;
            launched = 0;
        }
    }

    ReleaseMutex(hmutex);

    return rv;
}
The EnclaveBridge DLL
As the name suggests, this DLL holds the enclave bridge functions. This is a 100 percent native assembly with C linkages, and the methods from EnclaveLinkNative call into these functions. Essentially, they marshal data and wrap the calls in the mixed mode assembly to and from the enclave.
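To illustrate, a bridge function for the CPUID ECALL could be as small as the sketch below (the name en_cpuid is hypothetical; the sample's bridge functions follow the same shape):

// Sketch: a pure C bridge function that forwards to the ECALL proxy generated
// by the Edger8r tool (declared in Enclave_u.h).
#include "Enclave_u.h"

ENCLAVENATIVE_API sgx_status_t en_cpuid(sgx_enclave_id_t eid, int *rv, int leaf, uint32_t flags[4])
{
    // The return value reports whether the ECALL itself succeeded;
    // *rv carries the result of the function that ran inside the enclave.
    return e_cpuid(eid, rv, leaf, flags);
}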
The OCALL and the Callback Sequence
The most complicated piece of the sample application is the callback sequence used by the RDRAND operation. The OCALL must propagate from the enclave all the way up the application to the C# layer. The task is to pass a reference to a managed class instance method (a delegate) down to the enclave so that it can be invoked via the OCALL. The challenge is to do that within the following restrictions:
- The enclave is in its own DLL, which cannot depend on other DLLs.
- The enclave only supports a limited set of data types.
- The enclave can only link against 100 percent native functions with C linkages.
- There cannot be any circular DLL dependencies.
- The methodology must be thread-safe.
- The user must be able to cancel the operation.
The Delegate
The delegate is prototyped inside of EnclaveLinkManaged.h along with the EnclaveLinkManaged class definition:
public delegate int progress_callback(int, int);

public ref class EnclaveLinkManaged
{
    array<BYTE> ^rand;
    EnclaveLinkNative *native;

public:
    progress_callback ^callback;

    EnclaveLinkManaged();
    ~EnclaveLinkManaged();

    static void init_enclave();
    static void close_enclave();

    int cpuid(int leaf, array<UINT32>^ flags);
    String ^genrand(int mb, progress_callback ^cb);

    // C++/CLI doesn't support friend classes, so this is exposed publicly even though
    // it's only intended to be used by the EnclaveLinkNative class.
    int genrand_update(int generated, int target);
};
When each ProgressRandom object is instantiated, a delegate is assigned in the variable callback, pointing to the UpdateProgress instance method:
public partial class ProgressRandom : Form
{
    EnclaveLinkManaged enclave;
    int mb;
    Boolean cancel = false;
    progress_callback callback;
    TaskScheduler uicontext;
    int num;

    public ProgressRandom(int mb_in, int num_in)
    {
        enclave = new EnclaveLinkManaged();
        mb = mb_in;
        num = num_in;
        uicontext = TaskScheduler.FromCurrentSynchronizationContext();
        callback = new progress_callback(UpdateProgress);

        InitializeComponent();

        labelTask.Text = String.Format("Generating {0} MB of random data", mb);
    }
This variable is passed as an argument to the EnclaveLinkManaged object when the RDRAND operation is requested:
public Task<String> RunAsync()
{
    this.Refresh();

    // Create a thread using Task.Run
    return Task.Run<String>(() =>
    {
        String data;

        data = enclave.genrand(mb, callback);

        return data;
    });
}
The genrand() method inside of EnclaveLinkManaged saves this delegate to the property “callback”. It also creates a GCHandle that both points to itself and pins itself in memory, preventing the garbage collector from moving it in memory and thus making it accessible from unmanaged memory. This handle is passed as a pointer to the native object.
This is necessary because we cannot directly store a handle to a managed object as a member of an unmanaged class.
String ^EnclaveLinkManaged::genrand(int mb, progress_callback ^cb)
{
    UInt32 rv;
    int kb = 1024 * mb;
    String ^mshex = gcnew String("");
    unsigned char *block;

    // Marshal a handle to the managed object to a system pointer that
    // the native layer can use.
    GCHandle handle = GCHandle::Alloc(this);
    IntPtr pointer = GCHandle::ToIntPtr(handle);

    callback = cb;

    block = new unsigned char[1024];
    if (block == NULL) return mshex;

    // Call into the native layer. This will make the ECALL, which executes
    // callbacks via the OCALL.
    rv = (UInt32) native->genrand(kb, pointer.ToPointer(), block);
In the native object, we now have a pointer to the managed object, which we save in the member variable managed.
Next, we use a feature of C++11 to create a std::function reference that is bound to a class method. Unlike standard C function pointers, this std::function reference points to the class method in our instantiated object, not to a static or global function.
DWORD EnclaveLinkNative::genrand (int mkb, void *obj, unsigned char rbuffer[1024])
{
    using namespace std::placeholders;
    auto callback = std::bind(&EnclaveLinkNative::genrand_progress, this, _1, _2);
    sgx_status_t status;
    int rv;
    sgx_enclave_id_t thiseid;

    if (!get_enclave(&thiseid)) return 0;

    // Store the pointer to our managed object as a (void *). We'll marshal this later.
    managed = obj;

    // Retry if we lose the enclave due to a power transition
again:
    status = en_genrand(thiseid, &rv, mkb, callback, rbuffer);
Why do we need this layer of indirection? Because the next layer down, EnclaveBridge.dll, cannot have a linkage dependency on EnclaveLink.dll as this would create a circular reference (where A depends on B, and B depends on A). EnclaveBridge.dll needs an anonymous means of pointing to our instantiated class method.
Inside en_genrand() in EnclaveBridge.cpp, this std::function is converted to a void pointer. Enclaves only support a subset of data types, and they don’t support any of the C++11 extensions regardless, so we need to convert the std::function reference to something the enclave will accept. In this case, that means passing the pointer address in a generic data buffer. Why use a void pointer instead of an integer type? Because the size of a std::function object varies by architecture and implementation.
typedef std::function<int(int, int)> progress_callback_t;

ENCLAVENATIVE_API sgx_status_t en_genrand(sgx_enclave_id_t eid, int *rv, int kb, progress_callback_t callback, unsigned char *rbuffer)
{
    sgx_status_t status;
    size_t cbsize = sizeof(progress_callback_t);

    // Pass the callback pointer to the enclave as a 64-bit address value.
    status = e_genrand(eid, rv, kb, (void *)&callback, cbsize, rbuffer);

    return status;
}
Note that we must not only allocate this data buffer, but also tell the Edger8r tool how large the buffer is. That means we need to pass the size of the buffer in as an argument, even though it is never explicitly used.
Inside the enclave, the callback parameter literally just gets passed through and out the OCALL. The definition in the EDL file looks like this:
enclave {
    from "sgx_tstdc.edl" import *;

    trusted {
        /* define ECALLs here. */

        public int e_cpuid(int leaf, [out] uint32_t flags[4]);
        public int e_genrand(int kb, [in, size=sz] void *callback, size_t sz, [out, size=1024] unsigned char *block);
    };

    untrusted {
        /* define OCALLs here. */

        int o_genrand_progress ([in, size=sz] void *callback, size_t sz, int progress, int target);
    };
};
The callback starts unwinding in the OCALL, o_genrand_progress:
typedef std::function<int(int, int)> progress_callback_t;

int o_genrand_progress(void *cbref, size_t sz, int progress, int target)
{
    // Recast as a pointer to our callback function.
    progress_callback_t *callback = (progress_callback_t *) cbref;

    if (callback == NULL) return 1;

    // Propagate the cancellation condition back up the stack.
    return (*callback)(progress, target);
}
The callback parameter, cbref, is recast as a std::function binding and then executed with our two arguments: progress and target. This points back to the genrand_progress() method inside of the EnclaveLinkNative object, where the GCHandle is recast to a managed object reference and then executed.
int __cdecl EnclaveLinkNative::genrand_progress (int generated, int target)
{
    // Marshal a pointer to a managed object to native code and convert it to an
    // object pointer we can use from CLI code.
    EnclaveLinkManaged ^mobj;
    IntPtr pointer(managed);
    GCHandle mhandle;

    mhandle = GCHandle::FromIntPtr(pointer);
    mobj = (EnclaveLinkManaged ^) mhandle.Target;

    // Call the progress update function in the managed version of the object.
    // A retval of 0 means we should cancel our operation.
    return mobj->genrand_update(generated, target);
}
The next stop is the managed object. Here, the delegate that was saved in the callback class member is used to call up to the C# method.
int EnclaveLinkManaged::genrand_update(int generated, int target)
{
    return callback(generated, target);
}
This executes the UpdateProgress() method, which updates the UI. This delegate returns an int value of either 0 or 1, which represents the status of the cancellation button:
private int UpdateProgress(int received, int target)
{
    Task.Factory.StartNew(() =>
    {
        progressBarRand.Value = 100 * received / target;
        this.Text = String.Format("Thread {0}: {1}% complete", num, progressBarRand.Value);
    }, CancellationToken.None, TaskCreationOptions.None, uicontext);

    return (cancel) ? 0 : 1;
}
A return value of 0 means the user has asked to cancel the operation. This return code propagates back down the application layers into the enclave. The enclave code looks at the return value of the OCALL to determine whether or not to cancel:
// Make our callback. Be polite and only do this every MB.
// (Assuming 1 KB = 1024 bytes, 1 MB = 1024 KB)
if (!(i % 1024)) {
    status = o_genrand_progress(&rv, callback, sz, i + 1, kb);

    // rv == 0 means we got a cancellation request
    if (status != SGX_SUCCESS || rv == 0) return i;
}
Enclave Configuration
The default configuration for an enclave is to allow a single thread. As the sample application can run up to three threads in the enclave at one time—the CPUID function on the UI thread and the two RDRAND operations in background threads—the enclave configuration needed to be changed. This is done by setting the TCSNum parameter to 3 in Enclave.config.xml. If this parameter is left at its default of 1 only one thread can enter the enclave at a time, and simultaneous ECALLs will fail with the error code SGX_ERROR_OUT_OF_TCS.
<EnclaveConfiguration>
  <ProdID>0</ProdID>
  <ISVSVN>0</ISVSVN>
  <StackMaxSize>0x40000</StackMaxSize>
  <HeapMaxSize>0x100000</HeapMaxSize>
  <TCSNum>3</TCSNum>
  <TCSPolicy>1</TCSPolicy>
  <DisableDebug>0</DisableDebug>
  <MiscSelect>0</MiscSelect>
  <MiscMask>0xFFFFFFFF</MiscMask>
</EnclaveConfiguration>
Summary
Mixing Intel SGX with managed code is not difficult, but it can involve a number of intermediate steps. The sample C# application presented in this article represents one of the more complicated cases: multiple DLLs, multiple threads originating from .NET, locking in native space, OCALLs, and UI updates based on enclave operations. It is intended to demonstrate the flexibility that application developers really have when working with Intel SGX, in spite of the restrictions that enclaves impose.
Intel® Software Guard Extensions Tutorial Series: Part 3, Designing for Intel® SGX
In Part 3 of the Intel® Software Guard Extensions (Intel® SGX) tutorial series we’ll talk about how to design an application with Intel SGX in mind. We’ll take the concepts that we reviewed in Part 1, and apply them to the high-level design of our sample application, the Tutorial Password Manager, laid out in Part 2. We’ll look at the overall structure of the application and how it is impacted by Intel SGX and create a class model that will prepare us for the enclave design and integration.
You can find the list of all of the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.
While we won’t be coding up enclaves or enclave interfaces just yet, there is source code provided with this installment. The non-Intel SGX version of the application core, without its user interface, is available for download. It comes with a small test program, a console application written in C#, and a sample password vault file.
Designing for Enclaves
This is the general approach we’ll follow for designing the Tutorial Password Manager for Intel SGX:
- Identify the application’s secrets.
- Identify the providers and consumers of those secrets.
- Determine the enclave boundary.
- Tailor the application components for the enclave.
Identify the Application’s Secrets
The first step in designing an application for Intel SGX is to identify the application’s secrets.
A secret is anything that is not meant to be known or seen by others. Only the user or the application for which it is intended should have access to a secret, and it should not be exposed to other users or applications regardless of their privilege level. Potential secrets can include financial information, medical records, personally identifiable information, identity data, licensed media content, passwords, and encryption keys.
In the Tutorial Password Manager, there are several items that are immediately identifiable as secrets, shown in Table 1.
Secret
The user’s account passwords
The user’s account logins
The user’s master password or passphrase
The master key for the password vault
The encryption key for the account database
Table 1:Preliminary list of application secrets.
These are the obvious choices, but we’re going to expand this list by including all of the user’s account information and not just their logins. The revised list is shown in Table 2.
Secret
The user’s account passwords
The user’s account information
The user’s master password or passphrase
The master key for the password vault
The encryption key for the account database
Table 2: Revised list of application secrets.
Even without revealing the passwords, the account information is valuable to attackers. Exposing this data in the password manager leaks valuable clues to those with malicious intent. With this data, they can choose to launch attacks against the services themselves, perhaps using social engineering or password reset attacks, to obtain access to the owner’s account because they know exactly what to target.
Identify the Providers and Consumers of the Application’s Secrets
Once the application’s secrets have been identified, the next step is to determine their origins and destinations.
In the current version of Intel SGX, the enclave code is not encrypted, which means that anyone with access to the application files can disassemble and inspect it. By definition, something cannot be a secret if it is open to inspection, and that means that secrets should never be statically compiled into enclave code. An application’s secrets must originate from outside its enclaves and be loaded into them at runtime. In Intel SGX terminology, this is referred to as provisioning secrets into the enclave.
When a secret originates from a component outside of the Trusted Compute Base (TCB), it is important to minimize its exposure to untrusted code. (One of the main reasons why remote attestation is such a valuable component of Intel SGX is that it allows a service provider to establish a trusted relationship with an Intel SGX application, and then derive an encryption key that can be used to provision encrypted secrets to the application that only the trusted enclave on that client system can decrypt.) Similar care must be taken when a secret is exported out of an enclave. As a general rule, an application’s secrets should not be sent to untrusted code without first being encrypted inside of the enclave.
Unfortunately for the Tutorial Password Manager application, we do need to send secrets into and out of the enclave, and those secrets will have to exist in clear text at some point. The end user will be entering his or her account information and password via a keyboard or touchscreen, and recalling it at a future time as needed. Their account passwords will need to be shown on the screen, and even copied to the Windows* clipboard on request. These are core requirements for a password manager application to be useful.
What that means for us is that we can’t completely eliminate the attack surface: we can only minimize it, and we’ll need some mitigation strategy for dealing with secrets when they exist outside the enclave in plain text.
Secret | Source | Destination |
The user’s account passwords | User input*, password vault file | User interface*, clipboard*, password vault file |
The user’s account information | User input*, password vault file | User interface*, password vault file |
The user’s master password or passphrase | User input | Key derivation function |
The master key for the password vault | Key derivation function | Database key crypto |
The encryption key for the password database | Random generation, password vault file | Password vault crypto, password vault file |
Table 3: Application secrets, their sources, and their destinations. Potential security risks are denoted with an asterisk (*).
Table 3 adds the sources and destinations for the Tutorial Password Manager’s secrets. Potential problems—areas where secrets may be exposed to untrusted code—are denoted with an asterisk (*).
Determine the Enclave Boundary
Once the secrets have been identified, it’s time to determine the boundary for the enclave. Start by looking at the data flow of secrets through the application’s core components. The enclave boundary should:
- Encompass the minimum set of critical components that act on your application’s secrets.
- Completely contain as many secrets as is feasible.
- Minimize the interactions with, and dependencies on, untrusted code.
The data flows and chosen enclave boundary for the Tutorial Password Manager application are shown in Figure 1.
Figure 1: Data flow for secrets in the Tutorial Password Manager.
Here, the application secrets are depicted as circles, with blue circles representing secrets that will exist in plain text (unencrypted) at some point during the application’s execution and green circles representing secrets that are encrypted by the application. The enclave boundary has been drawn around the encryption and decryption routines, the key derivation function (KDF) and the random number generator. This does several things for us:
- The database/vault key, which is used to encrypt some of our application’s secrets (account information and passwords), is generated within the enclave and is never sent outside of it in clear text.
- The master key is derived from the user’s passphrase inside the enclave, and used to encrypt and decrypt the database/vault key. The master key is ephemeral and is never sent outside the enclave in any form.
- The database/vault key, account information, and account passwords are encrypted inside the enclave using encryption keys that are not visible to untrusted code (see #1 and #2).
Unfortunately, we have issues with unencrypted secrets crossing the enclave boundary that we simply can’t avoid. At some point during the Tutorial Password Manager’s execution, a user will have to enter a password on the keyboard or copy a password to the Windows clipboard. These are insecure channels that can’t be placed inside the enclave, and the operations are absolutely necessary for the functioning of the application. This is potentially a huge problem, which is compounded by the decision to build the application on top of a managed code base.
Protecting Secrets Outside the Enclave
There are no complete solutions for securing unencrypted secrets outside the enclave, only mitigation strategies that reduce the attack surface. The best we can do is minimize the amount of time that this information exists in a form that is easily compromised.
Here is some general advice for handling sensitive data in untrusted code:
- Zero-fill your data buffers when you are done with them. Be sure to use functions such as SecureZeroMemory (Windows) and memzero_explicit (Linux) that are guaranteed to not be optimized out by the compiler.
- Do not use the C++ standard template library (STL) containers to store sensitive data. The STL containers have their own memory management, which makes it difficult to ensure that the memory allocated to an object is securely wiped when the object is deleted. (By using custom allocators you can address this issue for some containers.)
- When working with managed code such as .NET, or languages that feature automatic memory management, use storage types that are specifically designed for holding secure data. Other storage types are at the mercy of the garbage collector and just-in-time compilation, and may not be cleared or freed on demand (if at all).
- If you must place data on the clipboard be sure to clear it after a short length of time. In particular, don’t allow it to remain there after the application has exited.
For the Tutorial Password Manager project, we have to work with both native and managed code. In native code, we’ll allocate wchar_t and char buffers, and use SecureZeroMemory to wipe them clean before freeing them. In the managed code space, we’ll employ .NET’s SecureString class.
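For the native buffers, the pattern is simply to wipe before freeing. A minimal sketch (the function name here is illustrative):

// Sketch: securely dispose of a sensitive native buffer.
#include <windows.h>

void dispose_secret(char *buffer, size_t len)
{
    if (buffer == NULL) return;

    // SecureZeroMemory is guaranteed not to be optimized away by the compiler.
    SecureZeroMemory(buffer, len);
    delete[] buffer;
}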
When sending a SecureString to unmanaged code, we’ll use the helper functions from System::Runtime::InteropServices to marshal the data.
using namespace System::Runtime::InteropServices;

LPWSTR PasswordManagerCore::M_SecureString_to_LPWSTR(SecureString ^ss)
{
    IntPtr wsp = IntPtr::Zero;

    if (!ss) return NULL;

    wsp = Marshal::SecureStringToGlobalAllocUnicode(ss);
    return (wchar_t *) wsp.ToPointer();
}
When marshaling data in the other direction, from native code to managed code, we have two methods. If the SecureString object already exists, we’ll use the Clear and AppendChar methods to set the new value from the wchar_t string.
password->Clear();
for (int i = 0; i < wpass_len; ++i) password->AppendChar(wpass[i]);
When creating a new SecureString object, we’ll use the constructor form that creates a SecureString from an existing wchar_t string.
try {
    name = gcnew SecureString(wname, (int) wcslen(wname));
    login = gcnew SecureString(wlogin, (int) wcslen(wlogin));
    url = gcnew SecureString(wurl, (int) wcslen(wurl));
}
catch (...) {
    rv = NL_STATUS_ALLOC;
}
Our password manager also supports transferring passwords to the Windows clipboard. The clipboard is an insecure storage space that can potentially be accessed by other users, and for this reason Microsoft recommends that sensitive data never be placed there. The point of a password manager, though, is to make it possible for users to create strong passwords that they do not have to remember. It also makes it possible to create lengthy passwords consisting of randomly generated characters, which would be difficult to type by hand. The clipboard provides much-needed convenience in exchange for some measure of risk.
To mitigate this risk, we need to take some extra precautions. The first is to ensure that the clipboard is emptied when the application exits. This is accomplished in the destructor in one of our native objects.
PasswordManagerCoreNative::~PasswordManagerCoreNative(void)
{
    if (!OpenClipboard(NULL)) return;
    EmptyClipboard();
    CloseClipboard();
}
We’ll also set up a clipboard timer. When a password is copied to the clipboard, we set a timer for 15 seconds and execute a function to clear the clipboard when it fires. If a timer is already running, meaning a new password was placed on the clipboard before the old one expired, that timer is cancelled and the new one takes its place.
void PasswordManagerCoreNative::start_clipboard_timer()
{
    // Use the default Timer Queue

    // Stop any existing timer
    if (timer != NULL) DeleteTimerQueueTimer(NULL, timer, NULL);

    // Start a new timer
    if (!CreateTimerQueueTimer(&timer, NULL, (WAITORTIMERCALLBACK)clear_clipboard_proc,
        NULL, CLIPBOARD_CLEAR_SECS * 1000, 0, 0)) return;
}

static void CALLBACK clear_clipboard_proc(PVOID param, BOOLEAN fired)
{
    if (!OpenClipboard(NULL)) return;
    EmptyClipboard();
    CloseClipboard();
}
Tailor the Application Components for the Enclave
With the secrets identified and the enclave boundary drawn, it’s time to structure the application while taking the enclave into account. There are significant restrictions on what can be done inside of an enclave, and these restrictions will dictate which components live inside the enclave, which live outside of it, and, when porting an existing application, which ones may need to be split in two.
The biggest restriction that impacts the Tutorial Password Manager is that enclaves cannot perform any I/O operations. The enclave can’t read from the keyboard or write to the display so all of our secrets—passwords and account information—must be marshaled into and out of the enclave. It also can’t read from or write to the vault file: the components that parse the vault file must be separated from components that perform the physical I/O. That means we are going to have to marshal more than just our secrets across the enclave boundary: we have to marshal the file contents as well.
Figure 2: Class diagram for the Tutorial Password Manager.
Figure 2 shows the basic class diagram for the application core (excluding the user interface), including which classes serve as the sources and destinations for our secrets. Note that the PasswordManagerCore class is considered the source and destination for secrets which must interact with the GUI in this diagram for simplicity’s sake. Table 4 briefly describes each class and its purpose.
Class | Type | Function |
PasswordManagerCore | Managed | Interact with the C# graphical user interface (GUI) and marshal data to the native layer. |
PasswordManagerCoreNative | Native, Untrusted | Interact with the managed PasswordManagerCore class. Also responsible for converting between Unicode and multibyte character data (this will be discussed in more detail in Part 4). |
VaultFile | Managed | Reads and writes from the vault file. |
Vault | Native, Enclave | Stores the password vault data in AccountRecord members. Deserializes the vault file on reads, and reserializes it for writing. |
AccountRecord | Native, Enclave | Stores the account information and password for each account in the user’s password vault. |
Crypto | Native, Enclave | Performs cryptographic functions. |
DRNG | Native, Enclave | Interface to the random number generator. |
Table 4: Class descriptions.
Note that we had to split the handling of the vault file into two pieces: one that does the physical I/O, and one that stores its contents once they are read and parsed. We also had to add serialization and deserialization methods to the Vault object as intermediate sources and destinations for our secrets. All of this is necessary because the VaultFile class can’t know anything about the structure of the vault file itself, since that would require access to cryptographic functions that are located inside the enclave.
We’ve also drawn a dotted line connecting the PasswordManagerCoreNative class to the Vault class. As you might recall from Part 2, enclaves can only link to C functions. These two C++ classes cannot communicate with one another directly: they must use an intermediary, which is denoted by the Bridge Functions box.
The Non-Intel® Software Guard Extensions Code Path
The diagram in Figure 2 is for the Intel SGX code path. The PasswordManagerCoreNative class cannot link directly to the Vault class because the latter is inside the enclave. In the non-Intel SGX code path, however, there is no such restriction: PasswordManagerCoreNative can directly contain a member of class Vault. This is the only shortcut we’ll take in the application design for the non-Intel SGX code path. To simplify the enclave integration, the non-enclave code path will still separate the vault processing into the Vault and VaultFile classes.
Another key difference between the two code paths is that the cryptographic functions in the Intel SGX path will come from the Intel SGX SDK. The non-Intel SGX code path can’t use these functions, so they will draw upon Microsoft’s Cryptography Next Generation* API (CNG). That means we have to maintain two, distinct copies of the Crypto class: one for use in enclaves and one for use in untrusted space. We’ll have to do the same with the DRNG class, too, since the Intel SGX code path will call sgx_read_rand instead of using the RDRAND intrinsic.
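As a rough sketch of what that split looks like, assuming a DRNG class with a rand_bytes() method (the method name and class interface here are assumptions for illustration), the untrusted copy can use the RDRAND intrinsic while the enclave copy calls sgx_read_rand():

// Untrusted copy of the class (sketch): fill a buffer using the RDRAND intrinsic.
// (Class declaration omitted.)
#include <immintrin.h>
#include <cstring>

int DRNG::rand_bytes(unsigned char *buf, size_t len)
{
    while (len) {
        unsigned int r;

        if (!_rdrand32_step(&r)) return 0;      // underflow handling/retries omitted
        size_t n = (len < sizeof(r)) ? len : sizeof(r);
        memcpy(buf, &r, n);
        buf += n;
        len -= n;
    }
    return 1;
}

// Enclave copy of the class (sketch): same interface, backed by sgx_read_rand().
#include "sgx_trts.h"

int DRNG::rand_bytes(unsigned char *buf, size_t len)
{
    return (sgx_read_rand(buf, len) == SGX_SUCCESS) ? 1 : 0;
}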
Sample Code
As mentioned in the introduction, there is sample code provided with this part for you to download. The attached archive includes the source code for the Tutorial Password Manager core DLL, prior to enclave integration. In other words, this is the non-Intel SGX version of the application core. There is no user interface provided, but we have included a rudimentary test application written in C# that runs through a series of test operations. It executes two test suites: one that creates a new vault file and performs various operations on it, and one that acts on a reference vault file that is included with the source distribution. As written, the test application expects the test vault to be located in your Documents folder, though you can change this in the TestSetup class if needed.
This source code was developed in Microsoft Visual Studio* Professional 2013 per the requirements stated in the introduction to the tutorial series. It does not require the Intel SGX SDK at this point, though you will need a system that supports Intel® Data Protection Technology with Secure Key.
Coming Up Next
In part 4 of the tutorial we’ll develop the enclave and the bridge functions. Stay tuned!
Find the list of all the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.
Intel Software Guard Extensions (Intel SGX) Tutorial Series: Looking ahead
Part 4 of the Intel Software Guard Extensions (Intel SGX) Tutorial Series will be coming out in the next few days. In it, we'll be starting our enclave implementation, focusing on the bridge/proxy functions for the enclave itself as well as the middleware layer needed for the C++ code to interact with it.
If you recall from the introduction, we are planning five broad phases in the series. With part 4 we complete our transition from the first phase, which focused on concepts and design, to the development and integration in the second. I want to take a few minutes to talk about what else is coming up and roughly where we are headed over the coming weeks.
- Part 5 will complete the development of the enclave. While part 4 is focused on the enclave interface layer and the enclave definition language (EDL), in part 5 we will code up the internals of enclave itself.
- In part 6, we'll add support for dual code paths so that the application runs on hardware both with and without Intel SGX support.
- In a change from our original plan for the series, part 7 will look at power events (specifically, suspend and resume) and their impact on enclaves.
- After that, we'll enter into the third phase of the tutorial which focuses on testing and validation. Here, we'll demonstrate that Intel SGX is providing the expected security benefits. We'll also look at tuning the enclave configuration to better match our usage.
- The final two phases, packaging and deployment, and disposition, will follow.
I should point out that these are all still plans and plans can change! The series is being developed as it's being released so we may find that topics need to be adjusted, added, or even dropped as we go. But for now, this is how things are shaping up.
Thank you for following along!
§
Intel® Software Guard Extensions Tutorial Series: Part 4, Enclave Design
In Part 4 of the Intel® Software Guard Extensions (Intel® SGX) tutorial series we’ll be designing our enclave and its interface. We’ll take a look at the enclave boundary that was defined in Part 3 and identify the necessary bridge functions, examine the impact the bridge functions have on the object model, and create the project infrastructure necessary to integrate the enclave into our application. We’ll only be stubbing the enclave ECALLS at this point; full enclave integration will come in Part 5 of the series.
You can find the list of all of the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.
There is source code provided with this installment of the series: the enclave stub and interface functions are provided for you to download.
Application Architecture
Before we jump into designing the enclave interface, we need to take a moment and consider the overall application architecture. As discussed in Part 1, enclaves are implemented as dynamically loaded libraries (DLLs under Windows* and shared libraries under Linux*) and they can only link against 100-percent native C code.
The Tutorial Password Manager, however, will have a GUI written in C#. It uses a mixed-mode assembly written in C++/CLI to get us from managed to unmanaged code, but while that assembly contains native code it is not a 100-percent native module and it cannot interface directly with an Intel SGX enclave. Attempts to incorporate the untrusted enclave bridge functions in C++/CLI assemblies will result in a fatal error:
Command line error D8045: cannot compile C file 'Enclave_u.c' with the /clr option
That means we need to place the untrusted bridge functions in a separate DLL that is all native code. As a result, our application will need to have, at minimum, three DLLs: the C++/CLI core, the enclave bridge, and the enclave itself. This structure is shown in Figure 1.
Figure 1. Component makeup for a mixed-mode application with enclaves.
Further Refinements
Since the enclave bridge functions must reside in a separate DLL, we’ll go a step further and place all the functions that deal directly with the enclave in that same DLL. This compartmentalization of the application layers will not only make the program easier to manage (and debug), but also ease integration by lessening the impact on the other modules. When a class or module has a specific task with a clearly defined boundary, changes to other modules are less likely to affect it.
In this case, the PasswordManagerCoreNative class should not be burdened with the additional task of instantiating enclaves. It just needs to know whether or not Intel SGX is supported on the platform so that it can execute the appropriate function.
As an example, the following code block shows the vault_unlock() method:
int PasswordManagerCoreNative::vault_unlock(const LPWSTR wpassphrase)
{
    int rv;
    UINT16 size;
    char *mbpassphrase = tombs(wpassphrase, -1, &size);

    if (mbpassphrase == NULL) return NL_STATUS_ALLOC;

    rv = vault.unlock(mbpassphrase);

    SecureZeroMemory(mbpassphrase, size);
    delete[] mbpassphrase;

    return rv;
}
This is a pretty simple method that takes the user’s passphrase as a wchar_t, converts it to a variable-length encoding (UTF-8), and then calls the unlock() method in the vault object. Rather than clutter up this class, and this method, with enclave-handling functions and logic, it would be best to add enclave support to this method through a one-line addition:
int PasswordManagerCoreNative::vault_unlock(const LPWSTR wpassphrase)
{
    int rv;
    UINT16 size;
    char *mbpassphrase = tombs(wpassphrase, -1, &size);

    if (mbpassphrase == NULL) return NL_STATUS_ALLOC;

    // Call the enclave bridge function if we support Intel SGX
    if (supports_sgx()) rv = ew_unlock(mbpassphrase);
    else rv = vault.unlock(mbpassphrase);

    SecureZeroMemory(mbpassphrase, size);
    delete[] mbpassphrase;

    return rv;
}
Our goal will be to put as little enclave awareness into this class as is feasible. The only other additions the PasswordManagerCoreNative class needs is a flag for Intel SGX support and methods to both set and get it.
class PASSWORDMANAGERCORE_API PasswordManagerCoreNative
{
    int _supports_sgx;

    // Other class members omitted for clarity

protected:
    void set_sgx_support(void) { _supports_sgx = 1; }
    int supports_sgx(void) { return _supports_sgx; }
Designing the Enclave
Now that we have an overall application plan in place, it’s time to start designing the enclave and its interface. To do that, we return to the class diagram for the application core in Figure 2, which was first introduced in Part 3. The objects that will reside in the enclave are shaded in green while the untrusted components are shaded in blue.
Figure 2. Class diagram for the Tutorial Password Manager with Intel® Software Guard Extensions.
The enclave boundary only crosses one connection: the link between the PasswordManagerCoreNative object and the Vault object. That suggests that the majority of our ECALLs will simply be wrappers around the class methods in Vault. We’ll also need to add some additional ECALLs to manage the enclave infrastructure. One of the complications of enclave development is that the ECALLs, OCALLs, and bridge functions must be native C code, while we are making extensive use of C++ features (objects, constructors, overloads, and others), so we’ll also need functions that span the gap between C and C++.
The wrapper and bridge functions will go in their own DLL, which we’ll name EnclaveBridge.dll. For clarity, we’ll prefix the wrapper functions with ew_ (for “enclave wrapper”), and the bridge functions that make the ECALLs with ve_ (for “vault enclave”).
Calls from PasswordManagerCoreNative to the corresponding method in Vault will follow the basic flow shown in Figure 3.
Figure 3. Execution flow for bridge functions and ECALLs.
The method in PasswordManagerCoreNative will call into the wrapper function in EnclaveBridge.dll. That wrapper will, in turn, invoke one or more ECALLs, which enter the enclave and invoke the corresponding class method in the Vault object. Once all ECALLs have completed, the wrapper function returns back to the calling method in PasswordManagerCoreNative and provides it with a return value.
Enclave Logistics
The first step in designing the enclave is working out a system for managing the enclave itself. The enclave must be launched and the resulting enclave ID must be provided to the ECALLs. Ideally, this should be transparent to the upper layers of the application.
The easiest solution for the Tutorial Password Manager is to use global variables in the EnclaveBridge DLL to hold the enclave information. This design decision comes with a restriction: only one thread can be active in the enclave at a time. This is a reasonable solution because the password manager application would not benefit from having multiple threads operating on the vault. Most of its actions are driven by the user interface and do not consume a significant amount of CPU time.
To solve the transparency problem, each wrapper function will first call a function to check to see if the enclave has been launched, and launch it if it hasn’t. This logic is fairly simple:
#define ENCLAVE_FILE _T("Enclave.signed.dll")

static sgx_enclave_id_t enclaveId = 0;
static sgx_launch_token_t launch_token = { 0 };
static int updated = 0;
static int launched = 0;
static sgx_status_t sgx_status = SGX_SUCCESS;

// Ensure the enclave has been created/launched.
static int get_enclave(sgx_enclave_id_t *eid)
{
    if (launched) return 1;
    else return create_enclave(eid);
}

static int create_enclave(sgx_enclave_id_t *eid)
{
    sgx_status = sgx_create_enclave(ENCLAVE_FILE, SGX_DEBUG_FLAG, &launch_token, &updated, &enclaveId, NULL);
    if (sgx_status == SGX_SUCCESS) {
        if ( eid != NULL ) *eid = enclaveId;
        launched = 1;
        return 1;
    }

    return 0;
}
Each wrapper function will start by calling get_enclave(), which checks to see if the enclave has been launched by examining a static variable. If it has, then it (optionally) populates the eid pointer with the enclave ID. This step is optional because the enclave ID is also stored as a global variable, enclaveID, which can of course just be used directly.
What happens if an enclave is lost due to a power event or a bug that causes it to crash? For that, we check the return value of the ECALL: it indicates the success or failure of the ECALL operation itself, not of the function being called in the enclave.
sgx_status = ve_initialize(enclaveId, &vault_rv);
The return value of the function being called in the enclave, if any, is transferred via the pointer which is provided as the second argument to the ECALL (these function prototypes are generated for you automatically by the Edger8r tool). You must always check the return value of the ECALL itself. Any result other than SGX_SUCCESS indicates that the program did not successfully enter the enclave and the requested function did not run. (Note that we’ve defined sgx_status as a global variable as well. This is another simplification stemming from our single-threaded design.)
We’ll add a function that examines the error returned by the ECALL and checks for a lost or crashed enclave:
static int lost_enclave()
{
    if (sgx_status == SGX_ERROR_ENCLAVE_LOST || sgx_status == SGX_ERROR_ENCLAVE_CRASHED) {
        launched = 0;
        return 1;
    }

    return 0;
}
These are recoverable errors. The upper layers don’t currently have logic to deal with these specific conditions, but we provide it in the EnclaveBridge DLL in order to support future enhancements.
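As an illustration of how a future enhancement might use it (this retry loop is a sketch, not part of the current sample, and ve_example is a placeholder ECALL), a wrapper could relaunch the enclave and repeat the ECALL when the enclave is lost:

// Sketch: retry an ECALL after the enclave has been lost and relaunched.
ENCLAVEBRIDGE_API int ew_example(int arg)
{
    int vault_rv;
    sgx_enclave_id_t eid;

again:
    if (!get_enclave(&eid)) return NL_STATUS_SGXERROR;

    sgx_status = ve_example(eid, &vault_rv, arg);   /* placeholder ECALL */
    if (lost_enclave()) goto again;                 /* relaunch and retry (a real implementation should cap retries) */

    RETURN_SGXERROR_OR(vault_rv);
}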
Also notice that there is no function provided to destroy the enclave. As long as the user has the password manager application open, the enclave is in place even if they choose to lock their vault. This is not good enclave etiquette. Enclaves draw from a finite pool of resources, even when idle. We’ll address this problem in a future segment of the series when we talk about data sealing.
The Enclave Definition Language
Before moving on to the actual enclave design, we’ll take a few moments to discuss the Enclave Definition Language (EDL) syntax. An enclave’s bridge functions, both its ECALLs and OCALLs, are prototyped in its EDL file, whose general structure is as follows:
enclave {
    // Include files

    // Import other edl files

    // Data structure declarations to be used as parameters of the function prototypes in edl

    trusted {
        // Include file if any. It will be inserted in the trusted header file (enclave_t.h)

        // Trusted function prototypes (ECALLs)
    };

    untrusted {
        // Include file if any. It will be inserted in the untrusted header file (enclave_u.h)

        // Untrusted function prototypes (OCALLs)
    };
};
ECALLs are prototyped in the trusted section, and OCALLs are prototyped in the untrusted section.
The EDL syntax is C-like and function prototypes very closely resemble C function prototypes, but it’s not identical. In particular, bridge function parameters and return values are limited to some fundamental data types and the EDL includes some additional keywords and syntax that defines some enclave behavior. The Intel® Software Guard Extensions (Intel® SGX) SDK User’s Guide explains the EDL syntax in great detail and includes a tutorial for creating a sample enclave. Rather than repeat all of that here, we’ll just discuss those elements of the language that are specific to our application.
When parameters are passed to enclave functions, they are marshaled into the protected memory space of the enclave. For parameters passed as values, no special action is required as the values are placed on the protected stack in the enclave just as they would be for any other function call. The situation is quite different for pointers, however.
For parameters passed as pointers, the data referenced by the pointer must be marshaled into and out of the enclave. The edge routines that perform this data marshalling need to know two things:
- Which direction should the data be copied: into the bridge function, out of the bridge function, or both directions?
- What is the size of the data buffer referenced by the pointer?
Pointer Direction
When providing a pointer parameter to a function, you must specify the direction with one of the bracketed keywords [in], [out], or [in, out]. Their meanings are given in Table 1.
Direction | ECALL | OCALL |
---|---|---|
in | The buffer is copied from the application into the enclave. Changes will only affect the buffer inside the enclave. | The buffer is copied from the enclave to the application. Changes will only affect the buffer outside the enclave. |
out | A buffer will be allocated inside the enclave and initialized with zeros. It will be copied to the original buffer when the ECALL exits. | A buffer will be allocated outside the enclave and initialized with zeros. This untrusted buffer will be copied to the original buffer in the enclave when the OCALL exits. |
in, out | Data is copied back and forth. | Same as ECALLs. |
Table 1. Pointer direction parameters and their meanings in ECALLs and OCALLs.
Note from the table that the direction is relative to the bridge function being called. For an ECALL, [in] means “copy the buffer to the enclave,” but for an OCALL it means “copy the buffer to the untrusted function.”
(There is also an option called user_check that can be used in place of these, but it’s not relevant to our discussion. See the SDK documentation for information on its use and purpose.)
Buffer Size
The edge routines calculate the total buffer size, in bytes, as:
bytes = element_size * element_count
By default, the edge routines assume element_count = 1, and calculate element_size from the element referenced by the pointer parameter, e.g., for an integer pointer it assumes element_size is:
sizeof(int)
For a single element of a fixed data type, such as an int or a float, no additional information needs to be provided in the EDL prototype for the function. For a void pointer, you must specify an element size or you’ll get an error at compile time. For arrays, char and wchar_t strings, and other types where the length of the data buffer is more than one element you must specify the number of elements in the buffer or only one element will be copied.
Add either the count or size parameter (or both) to the bracketed keywords for the pointer as appropriate. They can be set to a constant value or to one of the parameters of the function. For most cases, count and size are functionally the same, but it’s good practice to use them in their correct contexts. Strictly speaking, you would only specify size when passing a void pointer; everything else would use count.
If you are passing a C string or wstring (a NULL-terminated char or wchar_t array), then you can use the string or wstring parameter in place of count or size. In this case, the edge routines will determine the size of the buffer by getting the length of the string directly.
function([in, size=12] void *param);
function([in, count=len] char *buffer, uint32_t len);
function([in, string] char *cstr);
Note that you can only use string or wstring if the direction is set to [in] or [in, out]. When the direction is set only to [out], the string has not yet been created, so the edge routine can’t know the size of the buffer. Specifying [out, string] will generate an error at compile time.
Wrapper and Bridge Functions
We are now ready to define our wrapper and bridge functions. As we pointed out above, the majority of our ECALLs will be wrappers around the class methods in Vault. The class definition for the public member functions is shown below:
class PASSWORDMANAGERCORE_API Vault
{
    // Non-public methods and members omitted for brevity

public:
    Vault();
    ~Vault();

    int initialize();
    int initialize(const char *header, UINT16 size);
    int load_vault(const char *edata);

    int get_header(unsigned char *header, UINT16 *size);
    int get_vault(unsigned char *edata, UINT32 *size);

    UINT32 get_db_size();

    void lock();
    int unlock(const char *password);
    int set_master_password(const char *password);
    int change_master_password(const char *oldpass, const char *newpass);

    int accounts_get_count(UINT32 *count);
    int accounts_get_info(UINT32 idx, char *mbname, UINT16 *mbname_len, char *mblogin, UINT16 *mblogin_len, char *mburl, UINT16 *mburl_len);

    int accounts_get_password(UINT32 idx, char **mbpass, UINT16 *mbpass_len);

    int accounts_set_info(UINT32 idx, const char *mbname, UINT16 mbname_len, const char *mblogin, UINT16 mblogin_len, const char *mburl, UINT16 mburl_len);
    int accounts_set_password(UINT32 idx, const char *mbpass, UINT16 mbpass_len);

    int accounts_generate_password(UINT16 length, UINT16 pwflags, char *cpass);

    int is_valid() { return _VST_IS_VALID(state); }
    int is_locked() { return ((state&_VST_LOCKED) == _VST_LOCKED) ? 1 : 0; }
};
There are several problem functions in this class. Some of them are immediately obvious, such as the constructor, destructor, and the overloads for initialize(). These are C++ features that we must invoke using C functions. Some of the problems, though, are not immediately obvious because they stem from the function’s inherent design. (Some of these problem methods were poorly designed on purpose so that we could cover specific issues in this tutorial, but some were just poorly designed, period!) We’ll tackle each problem, one by one, presenting both the prototypes for the wrapper functions and the EDL prototypes for the proxy/bridge routines.
The Constructor and Destructor
In the non-Intel SGX code path, the Vault class is a member of PasswordManagerCoreNative. We can’t do this for the Intel SGX code path; however, the enclave can include C++ code so long as the bridge functions themselves are pure C functions.
Since we have already limited the enclave to a single thread, we can make the Vault class a static, global object in the enclave. This greatly simplifies our code and eliminates the need for creating bridge functions and logic to instantiate it.
The Overload on initialize()
There are two prototypes for the initialize() method:
- The method with no arguments initializes the Vault object for a new password vault with no contents. This is a password vault that the user is creating for the first time.
- The method with two arguments initializes the Vault object from the header of the vault file. This represents an existing password vault that the user is opening (and, later on, attempting to unlock).
This will be broken up into two wrapper functions:
ENCLAVEBRIDGE_API int ew_initialize();
ENCLAVEBRIDGE_API int ew_initialize_from_header(const char *header, uint16_t hsize);
And the corresponding ECALLs will be defined as:
public int ve_initialize ();
public int ve_initialize_from_header ([in, count=len] unsigned char *header, uint16_t len);
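Inside the enclave source, these ECALLs can simply delegate to that static, global Vault object. The enclave internals are the subject of Part 5, so the following is only a sketch:

// Enclave.cpp (sketch): one statically allocated Vault instance for our
// single-threaded enclave, with the ECALLs acting as thin C wrappers.
#include "Enclave_t.h"
#include "Vault.h"

static Vault vault;

int ve_initialize()
{
    return vault.initialize();
}

int ve_initialize_from_header(unsigned char *header, uint16_t len)
{
    return vault.initialize((const char *) header, len);
}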
get_header()
This method has a fundamental design issue. Here’s the prototype:
int get_header(unsigned char *header, uint16_t *size);
This function accomplishes one of two tasks:
- It gets the header block for the vault file and places it in the buffer pointed to by header. The caller must allocate enough memory to store this data.
- If you pass a NULL pointer in the header parameter, the uint16_t pointed to by size is set to the size of the header block, so that the caller knows how much memory to allocate.
This is a fairly common compaction technique in some programming circles, but it presents a problem for enclaves: when you pass a pointer to an ECALL or an OCALL, the edge functions copy the data referenced by the pointer into or out of the enclave (or both). Those edge functions need to know the size of the data buffer so they know how many bytes to copy. The first usage involves a valid pointer with a variable size which is not a problem, but the second usage has a NULL pointer and a size of zero.
We could probably come up with an EDL prototype for the ECALL that could make this work, but clarity should generally trump brevity. It’s better to split this into two ECALLs:
public int ve_get_header_size ([out] uint16_t *sz);
public int ve_get_header ([out, count=len] unsigned char *header, uint16_t len);
The enclave wrapper function will take care of the necessary logic so that we don’t have to make changes to other classes:
ENCLAVEBRIDGE_API int ew_get_header(unsigned char *header, uint16_t *size)
{
    int vault_rv;

    if (!get_enclave(NULL)) return NL_STATUS_SGXERROR;

    if ( header == NULL ) sgx_status = ve_get_header_size(enclaveId, &vault_rv, size);
    else sgx_status = ve_get_header(enclaveId, &vault_rv, header, *size);

    RETURN_SGXERROR_OR(vault_rv);
}
accounts_get_info()
This method operates similarly to get_header(): pass a NULL pointer and it returns the size of the object in the corresponding parameter. However, it is uglier and sloppier because it applies the same trick to several buffer parameters at once. It is better off being broken up into two wrapper functions:
ENCLAVEBRIDGE_API int ew_accounts_get_info_sizes(uint32_t idx, uint16_t *mbname_sz, uint16_t *mblogin_sz, uint16_t *mburl_sz);
ENCLAVEBRIDGE_API int ew_accounts_get_info(uint32_t idx, char *mbname, uint16_t mbname_sz, char *mblogin, uint16_t mblogin_sz, char *mburl, uint16_t mburl_sz);
And two corresponding ECALLs:
public int ve_accounts_get_info_sizes (uint32_t idx, [out] uint16_t *mbname_sz, [out] uint16_t *mblogin_sz, [out] uint16_t *mburl_sz);
public int ve_accounts_get_info (uint32_t idx,
    [out, count=mbname_sz] char *mbname, uint16_t mbname_sz,
    [out, count=mblogin_sz] char *mblogin, uint16_t mblogin_sz,
    [out, count=mburl_sz] char *mburl, uint16_t mburl_sz
);
accounts_get_password()
This is the worst offender of the lot. Here’s the prototype:
int accounts_get_password(UINT32 idx, char **mbpass, UINT16 *mbpass_len);
The first thing you’ll notice is that it passes a pointer to a pointer in mbpass. This method is allocating memory.
In general, this is not a good design. No other method in the Vault class allocates memory so it is internally inconsistent, and the API violates convention by not providing a method to free this memory on the caller’s behalf. It also poses a unique problem for enclaves: an enclave cannot allocate memory in untrusted space.
This could be handled in the wrapper function. It could allocate the memory and then make the ECALL and it would all be transparent to the caller, but we have to modify the method in the Vault class, regardless, so we should just fix this the correct way and make the corresponding changes to PasswordManagerCoreNative. The caller should be given two functions: one to get the password length and one to fetch the password, just as with the previous two examples. PasswordManagerCoreNative should be responsible for allocating the memory, not any of these functions (the non-Intel SGX code path should be changed, too).
ENCLAVEBRIDGE_API int ew_accounts_get_password_size(uint32_t idx, uint16_t *len);
ENCLAVEBRIDGE_API int ew_accounts_get_password(uint32_t idx, char *mbpass, uint16_t len);
The EDL definition should look familiar by now:
public int ve_accounts_get_password_size (uint32_t idx, [out] uint16_t *mbpass_sz);
public int ve_accounts_get_password (uint32_t idx, [out, count=mbpass_sz] char *mbpass, uint16_t mbpass_sz);
load_vault()
The problem with load_vault() is subtle. The prototype is fairly simple, and at first glance it may look completely innocuous:
int load_vault(const char *edata);
What this method does is load the encrypted, serialized password database into the Vault object. Because the Vault object has already read the header, it knows how large the incoming buffer will be.
The issue here is that the enclave’s edge functions don’t have this information. A length has to be explicitly given to the ECALL so that the edge function knows how many bytes to copy from the incoming buffer into the enclave’s internal buffer, but the size is stored inside the enclave. It’s not available to the edge function.
The wrapper function’s prototype can mirror the class method’s prototype, as follows:
ENCLAVEBRIDGE_API int ew_load_vault(const unsigned char *edata);
The ECALL, however, needs to be given the vault data size as a parameter so that it can be used to define the size of the incoming data buffer in the EDL file:
public int ve_load_vault ([in, count=len] unsigned char *edata, uint32_t len);
To keep this transparent to the caller, the wrapper function will be given extra logic. It will be responsible for fetching the vault size from the enclave and then passing it through as a parameter to this ECALL.
ENCLAVEBRIDGE_API int ew_load_vault(const unsigned char *edata)
{
    int vault_rv;
    uint32_t dbsize;

    if (!get_enclave(NULL)) return NL_STATUS_SGXERROR;

    // We need to get the size of the password database before entering the enclave
    // to send the encrypted blob.
    sgx_status = ve_get_db_size(enclaveId, &dbsize);
    if (sgx_status == SGX_SUCCESS) {
        // Now we can send the encrypted vault data across.
        sgx_status = ve_load_vault(enclaveId, &vault_rv, (unsigned char *) edata, dbsize);
    }

    RETURN_SGXERROR_OR(vault_rv);
}
A Few Words on Unicode
In Part 3, we mentioned that the PasswordManagerCoreNative class is also tasked with converting between wchar_t and char strings. Given that enclaves support the wchar_t data type, why do this at all?
This is a design decision intended to minimize our footprint. In Windows, the wchar_t data type is the native encoding for Win32 APIs and it stores UTF-16 encoded characters. UTF-16 uses 16-bit code units in order to support non-ASCII characters, particularly for languages that aren’t based on the Latin alphabet or have a large number of characters. The problem with UTF-16 is that a character is never smaller than 16 bits, even when encoding plain ASCII text.
Rather than store twice as much data both on disk and inside the enclave for the common case where the user’s account information is in plain ASCII and incur the performance penalty of having to copy and encrypt those extra bytes, the Tutorial Password Manager converts all of the strings coming from .NET to the UTF-8 encoding. UTF-8 is a variable-length encoding, where each character is represented by one to four 8-bit bytes. It is backwards-compatible with ASCII and it results in a much more compact encoding than UTF-16 for plain ASCII text. There are cases where UTF-8 will result in longer strings than UTF-16, but for our tutorial password manager we’ll accept that tradeoff.
A commercial application would choose the best encoding for the user’s native language, and then record that encoding in the vault (so that it would know which encoding was used to create it in case the vault is opened on a system using a different native language).
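As an illustration, the conversion on the Windows side can be done with the Win32 WideCharToMultiByte API. The helper below is a hypothetical sketch, not the actual PasswordManagerCoreNative method:

#include <Windows.h>
#include <string>

// Convert a UTF-16 (wchar_t) string to UTF-8 before handing it to the bridge DLL.
static bool to_utf8(const wchar_t *wstr, std::string &out)
{
    // First call computes the required buffer size in bytes, including the NUL.
    int len = WideCharToMultiByte(CP_UTF8, 0, wstr, -1, NULL, 0, NULL, NULL);
    if (len <= 0) return false;

    out.resize(len);
    if (WideCharToMultiByte(CP_UTF8, 0, wstr, -1, &out[0], len, NULL, NULL) == 0)
        return false;

    out.resize(len - 1); // drop the trailing NUL written by the API
    return true;
}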
Sample Code
As mentioned in the introduction, there is sample code provided with this part for you to download. The attached archive includes the source code for the Tutorial Password Manager bridge DLL and the enclave DLL. The enclave functions are just stubs at this point, and they will be filled out in Part 5.
Coming Up Next
In Part 5 of the tutorial we’ll complete the enclave by porting the Crypto, DRNG, and Vault classes to the enclave, and connecting them to the ECALLs. Stay tuned!
Intel® Software Guard Extensions Tutorial Series: Part 5, Enclave Development
In Part 5 of the Intel® Software Guard Extensions (Intel® SGX) tutorial series, we’ll finish developing the enclave for the Tutorial Password Manager application. In Part 4 of the series, we created a DLL to serve as our interface layer between the enclave bridge functions and the C++/CLI program core, and defined our enclave interface. With those components in place, we can now focus our attention on the enclave itself.
You can find the list of all of the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.
There is source code provided with this installment of the series: the completed application with its enclave. This version is hardcoded to run the Intel SGX code path.
The Enclave Components
To identify which components need to be implemented within the enclave, we’ll refer to the class diagram for the application core in Figure 1, which was first introduced in Part 3. As before, the objects that will reside in the enclave are shaded in green while the untrusted components are shaded in blue.
Figure 1. Class diagram for the Tutorial Password Manager with Intel® Software Guard Extensions.
From this we can identify four classes that need to be ported:
- Vault
- AccountRecord
- Crypto
- DRNG
Before we get started, however, we do need to make a design decision. Our application must function on systems both with and without Intel SGX support, and that means we can’t simply convert our existing classes so that they function within the enclave. We must create two versions of each: one intended for use in enclaves, and one for use in untrusted memory. The question is, how should this dual-support be implemented?
Option 1: Conditional Compilation
The first option is to implement both the enclave and untrusted functionality in the same source module and use preprocessor definitions and #ifdef statements to compile the appropriate code based on the context. The advantage of this approach is that we only need one source file for each class, and thus do not have to maintain changes in two places. The disadvantages are that the code can be more difficult to read, particularly if the changes between the two versions are numerous or significant, and the project structure will be more complex. Two of our Visual Studio* projects, Enclave and PasswordManagerCore, will share source files, and each will need to set a preprocessor symbol to ensure that the correct source code is compiled.
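A minimal sketch of what Option 1 might look like in a shared source file follows. The SGX_ENCLAVE symbol and the WIPE macro are assumptions made purely for illustration:

// Shared header sketch: the Enclave project would define SGX_ENCLAVE in its
// preprocessor settings; the PasswordManagerCore project would not.
#ifdef SGX_ENCLAVE
    // Trusted build: enclave memory is encrypted, so no wipe is needed.
    #define WIPE(buf, len)
#else
    // Untrusted build: follow the Part 3 guideline and zero sensitive buffers.
    #define WIPE(buf, len) SecureZeroMemory((buf), (len))
#endif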
Option 2: Separate Classes
The second option is to duplicate each source file that has to go into the enclave. The advantage of this approach is that the enclave gets its own copy of the source files which we can modify directly, allowing for a simpler project structure and code that is easier to read. But this comes at a cost: if we need to make changes to the classes, those changes must be made in two places, even if those changes are common to both the enclave and untrusted versions.
Option 3: Inheritance
The third option is to use the C++ feature of class inheritance. The functions common to both versions of the class would be implemented in the base class, and the derived classes would implement the branch-specific methods. The big advantage to this approach is that it is a very natural and elegant solution to the problem, using a feature of the language that is designed to do exactly what we need. The disadvantages are the added complexity required in both the project structure and the code itself.
There is no hard and fast rule here, and the decision does not have to be a global one. A good rule of thumb is that Option 1 is best for modules where the changes are small or easily compartmentalized, and Options 2 and 3 are best when the changes are significant or result in source code that is difficult to read and maintain. However, it really comes down to style and preference, and any of these approaches is fine.
For now, we’ll choose Option 2 because it allows for easy side-by-side comparisons of the enclave and untrusted source files. In a future installment of the tutorial series we may switch to Option 3 in order to tighten up the code.
The Enclave Classes
Each class has its own set of issues and challenges when it comes to adapting it to the enclave, but there is one universal truth that applies to all of them: we no longer have to zero-fill our memory before freeing it. As you recall from Part 3, this was a recommended action when handling secure data in untrusted memory. Because enclave memory is encrypted by the CPU, using an encryption key that is never exposed outside the processor, the contents of freed enclave memory appear as random data to other software on the system. This means we can remove all calls to SecureZeroMemory that are inside the enclave.
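For illustration, here is roughly what that difference looks like when releasing the encrypted database buffer. This is a sketch only; the actual cleanup lives in methods such as clear(), and the allocation details may differ:

// Untrusted Vault (Part 3 guideline): wipe secrets before freeing them.
if (db_data != NULL) {
    SecureZeroMemory(db_data, db_size);
    delete[] db_data;
    db_data = NULL;
}

// Enclave version: EPC pages are encrypted by the CPU, so a plain free suffices.
if (db_data != NULL) {
    delete[] db_data;
    db_data = NULL;
}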
The Vault Class
The Vault class is our interface to the password vault operations. All of our bridge functions act through one or more methods in Vault. Its declaration from Vault.h is shown below.
class PASSWORDMANAGERCORE_API Vault
{
    Crypto crypto;
    char m_pw_salt[8];
    char db_key_nonce[12];
    char db_key_tag[16];
    char db_key_enc[16];
    char db_key_obs[16];
    char db_key_xor[16];
    UINT16 db_version;
    UINT32 db_size; // Use get_db_size() to fetch this value so it gets updated as needed
    char db_data_nonce[12];
    char db_data_tag[16];
    char *db_data;
    UINT32 state;
    // Cache the number of defined accounts so that the GUI doesn't have to fetch
    // "empty" account info unnecessarily.
    UINT32 naccounts;
    AccountRecord accounts[MAX_ACCOUNTS];

    void clear();
    void clear_account_info();
    void update_db_size();

    void get_db_key(char key[16]);
    void set_db_key(const char key[16]);

public:
    Vault();
    ~Vault();

    int initialize();
    int initialize(const unsigned char *header, UINT16 size);
    int load_vault(const unsigned char *edata);

    int get_header(unsigned char *header, UINT16 *size);
    int get_vault(unsigned char *edata, UINT32 *size);

    UINT32 get_db_size();

    void lock();
    int unlock(const char *password);
    int set_master_password(const char *password);
    int change_master_password(const char *oldpass, const char *newpass);

    int accounts_get_count(UINT32 *count);
    int accounts_get_info_sizes(UINT32 idx, UINT16 *mbname_sz, UINT16 *mblogin_sz, UINT16 *mburl_sz);
    int accounts_get_info(UINT32 idx, char *mbname, UINT16 mbname_sz, char *mblogin, UINT16 mblogin_sz, char *mburl, UINT16 mburl_sz);

    int accounts_get_password_size(UINT32 idx, UINT16 *mbpass_sz);
    int accounts_get_password(UINT32 idx, char *mbpass, UINT16 mbpass_sz);

    int accounts_set_info(UINT32 idx, const char *mbname, UINT16 mbname_len, const char *mblogin, UINT16 mblogin_len, const char *mburl, UINT16 mburl_len);
    int accounts_set_password(UINT32 idx, const char *mbpass, UINT16 mbpass_len);

    int accounts_generate_password(UINT16 length, UINT16 pwflags, char *cpass);

    int is_valid() { return _VST_IS_VALID(state); }
    int is_locked() { return ((state&_VST_LOCKED) == _VST_LOCKED) ? 1 : 0; }
};
The declaration for the enclave version of this class, which we’ll call E_Vault for clarity, will be identical except for one crucial change: database key handling.
In the untrusted code path, the Vault object must store the database key, decrypted, in memory. Every time we make a change to our password vault we have to encrypt the updated vault data and write it to disk, and that means the key must be at our disposal. We have four options:
- Prompt the user for their master password on every change so that the database key can be derived on demand.
- Cache the user’s master password so that the database key can be derived on demand without user intervention.
- Encrypt, encode, and/or obscure the database key in memory.
- Store the key in the clear.
None of these are good solutions and they highlight the need for technologies like Intel SGX. The first is arguably the most secure, but no user would want to run an application that behaved in this manner. The second could be achieved using the SecureString class in .NET*, but it is still vulnerable to inspection via a debugger and there is a performance cost associated with the key derivation function that a user might find unacceptable. The third option is effectively as insecure as the second, only without the performance penalty. The fourth option is the worst of the lot.
Our Tutorial Password Manager uses the third option: the database key is XOR’d with a 128-bit value that is randomly generated when a vault file is opened, and it is stored in memory only in this XOR’d form. This is effectively a one-time pad encryption scheme. It is open to inspection for anyone running a debugger, but it does limit the amount of time in which the database key is present in memory in the clear.
void Vault::set_db_key(const char db_key[16])
{
    UINT i, j;

    for (i = 0; i < 4; ++i)
        for (j = 0; j < 4; ++j)
            db_key_obs[4 * i + j] = db_key[4 * i + j] ^ db_key_xor[4 * i + j];
}

void Vault::get_db_key(char db_key[16])
{
    UINT i, j;

    for (i = 0; i < 4; ++i)
        for (j = 0; j < 4; ++j)
            db_key[4 * i + j] = db_key_obs[4 * i + j] ^ db_key_xor[4 * i + j];
}
This is obviously security through obscurity, and since we are publishing the source code, it’s not even particularly obscure. We could choose a better algorithm or go to greater lengths to hide both the database key and the pad’s secret key (including how they are stored in memory); but in the end, the method we choose would still be vulnerable to inspection via a debugger, and the algorithm would still be published for anyone to see.
Inside the enclave, however, this problem goes away. The memory is protected by hardware-backed encryption, so even when the database key is decrypted it is not open to inspection by anyone, even a process running with elevated privileges. As a result, we no longer need these class members or methods:
char db_key_obs[16];
char db_key_xor[16];

void get_db_key(char key[16]);
void set_db_key(const char key[16]);
We can replace them with just one class member: a char array to hold the database key.
char db_key[16];
The AccountRecord Class
The account data is stored in a fixed-size array of AccountRecord objects as a member of the Vault object. The declaration for AccountRecord is also found in Vault.h, and it is shown below:
class PASSWORDMANAGERCORE_API AccountRecord
{
    char nonce[12];
    char tag[16];
    // Store these in their multibyte form. There's no sense in translating
    // them back to wchar_t since they have to be passed in and out as
    // char * anyway.
    char *name;
    char *login;
    char *url;
    char *epass;
    UINT16 epass_len; // Can't rely on NULL termination! It's an encrypted string.

    int set_field(char **field, const char *value, UINT16 len);
    void zero_free_field(char *field, UINT16 len);

public:
    AccountRecord();
    ~AccountRecord();

    void set_nonce(const char *in) { memcpy(nonce, in, 12); }
    void set_tag(const char *in) { memcpy(tag, in, 16); }

    int set_enc_pass(const char *in, UINT16 len);
    int set_name(const char *in, UINT16 len) { return set_field(&name, in, len); }
    int set_login(const char *in, UINT16 len) { return set_field(&login, in, len); }
    int set_url(const char *in, UINT16 len) { return set_field(&url, in, len); }

    const char *get_epass() { return (epass == NULL)? "" : (const char *)epass; }
    const char *get_name() { return (name == NULL) ? "" : (const char *)name; }
    const char *get_login() { return (login == NULL) ? "" : (const char *)login; }
    const char *get_url() { return (url == NULL) ? "" : (const char *)url; }
    const char *get_nonce() { return (const char *)nonce; }
    const char *get_tag() { return (const char *)tag; }

    UINT16 get_name_len() { return (name == NULL) ? 0 : (UINT16)strlen(name); }
    UINT16 get_login_len() { return (login == NULL) ? 0 : (UINT16)strlen(login); }
    UINT16 get_url_len() { return (url == NULL) ? 0 : (UINT16)strlen(url); }
    UINT16 get_epass_len() { return (epass == NULL) ? 0 : epass_len; }

    void clear();
};
We actually don’t need to do anything to this class for it to work inside the enclave. Other than removing the unnecessary zero-fill-before-free calls, this class is fine as is. However, we are going to change it anyway in order to illustrate a point: within the enclave, we gain some flexibility that we did not have before.
Returning to Part 3, another of our guidelines for securing data in untrusted memory space was avoiding container classes that manage their own memory, specifically the Standard Template Library’s std::string class. Inside the enclave this problem goes away, too. For the same reason that we don’t need to zero-fill our memory before freeing it, we don’t have to worry about how the Standard Template Library (STL) containers manage their memory. The enclave memory is encrypted, so even if fragments of our secure data remain there as a result of container operations, they can’t be inspected by other processes.
There’s also a good reason to use the std::string class inside the enclave: reliability. The code behind the STL containers has been through significant peer review over the years, and it can be argued that it is safer to use them than to implement our own high-level string functions when given the choice. For simple code like what’s in the AccountRecord class, it’s probably not a significant issue, but in more complex programs this can be a huge benefit. However, this does come at the cost of a larger DLL due to the added STL code.
The new class declaration, which we’ll call E_AccountRecord, is shown below:
#define TRY_ASSIGN(x) try{x.assign(in,len);} catch(...){return 0;} return 1

class E_AccountRecord
{
    char nonce[12];
    char tag[16];
    // Store these in their multibyte form. There's no sense in translating
    // them back to wchar_t since they have to be passed in and out as
    // char * anyway.
    string name, login, url, epass;

public:
    E_AccountRecord();
    ~E_AccountRecord();

    void set_nonce(const char *in) { memcpy(nonce, in, 12); }
    void set_tag(const char *in) { memcpy(tag, in, 16); }

    int set_enc_pass(const char *in, uint16_t len) { TRY_ASSIGN(epass); }
    int set_name(const char *in, uint16_t len) { TRY_ASSIGN(name); }
    int set_login(const char *in, uint16_t len) { TRY_ASSIGN(login); }
    int set_url(const char *in, uint16_t len) { TRY_ASSIGN(url); }

    const char *get_epass() { return epass.c_str(); }
    const char *get_name() { return name.c_str(); }
    const char *get_login() { return login.c_str(); }
    const char *get_url() { return url.c_str(); }

    const char *get_nonce() { return (const char *)nonce; }
    const char *get_tag() { return (const char *)tag; }

    uint16_t get_name_len() { return (uint16_t) name.length(); }
    uint16_t get_login_len() { return (uint16_t) login.length(); }
    uint16_t get_url_len() { return (uint16_t) url.length(); }
    uint16_t get_epass_len() { return (uint16_t) epass.length(); }

    void clear();
};
The tag and nonce members are still stored as char arrays. Our password encryption is done with AES in GCM mode, using a 128-bit key, a 96-bit nonce, and a 128-bit authentication tag. Since the size of the nonce and the tag are fixed there is no reason to store them as anything other than simple char arrays.
Note that this std::string-based approach has allowed us to almost completely define the class in the header file.
The Crypto Class
The Crypto class provides our cryptographic functions. The class declaration is shown below.
class PASSWORDMANAGERCORE_API Crypto
{
    DRNG drng;

    crypto_status_t aes_init (BCRYPT_ALG_HANDLE *halgo, LPCWSTR algo_id, PBYTE chaining_mode, DWORD chaining_mode_len, BCRYPT_KEY_HANDLE *hkey, PBYTE key, ULONG key_len);
    void aes_close (BCRYPT_ALG_HANDLE *halgo, BCRYPT_KEY_HANDLE *hkey);

    crypto_status_t aes_128_gcm_encrypt(PBYTE key, PBYTE nonce, ULONG nonce_len, PBYTE pt, DWORD pt_len, PBYTE ct, DWORD ct_sz, PBYTE tag, DWORD tag_len);
    crypto_status_t aes_128_gcm_decrypt(PBYTE key, PBYTE nonce, ULONG nonce_len, PBYTE ct, DWORD ct_len, PBYTE pt, DWORD pt_sz, PBYTE tag, DWORD tag_len);

    crypto_status_t sha256_multi (PBYTE *messages, ULONG *lengths, BYTE hash[32]);

public:
    Crypto(void);
    ~Crypto(void);

    crypto_status_t generate_database_key (BYTE key_out[16], GenerateDatabaseKeyCallback callback);
    crypto_status_t generate_salt (BYTE salt[8]);
    crypto_status_t generate_salt_ex (PBYTE salt, ULONG salt_len);
    crypto_status_t generate_nonce_gcm (BYTE nonce[12]);

    crypto_status_t unlock_vault(PBYTE passphrase, ULONG passphrase_len, BYTE salt[8], BYTE db_key_ct[16], BYTE db_key_iv[12], BYTE db_key_tag[16], BYTE db_key_pt[16]);

    crypto_status_t derive_master_key (PBYTE passphrase, ULONG passphrase_len, BYTE salt[8], BYTE mkey[16]);
    crypto_status_t derive_master_key_ex (PBYTE passphrase, ULONG passphrase_len, PBYTE salt, ULONG salt_len, ULONG iterations, BYTE mkey[16]);

    crypto_status_t validate_passphrase(PBYTE passphrase, ULONG passphrase_len, BYTE salt[8], BYTE db_key[16], BYTE db_iv[12], BYTE db_tag[16]);
    crypto_status_t validate_passphrase_ex(PBYTE passphrase, ULONG passphrase_len, PBYTE salt, ULONG salt_len, ULONG iterations, BYTE db_key[16], BYTE db_iv[12], BYTE db_tag[16]);

    crypto_status_t encrypt_database_key (BYTE master_key[16], BYTE db_key_pt[16], BYTE db_key_ct[16], BYTE iv[12], BYTE tag[16], DWORD flags= 0);
    crypto_status_t decrypt_database_key (BYTE master_key[16], BYTE db_key_ct[16], BYTE iv[12], BYTE tag[16], BYTE db_key_pt[16]);

    crypto_status_t encrypt_account_password (BYTE db_key[16], PBYTE password_pt, ULONG password_len, PBYTE password_ct, BYTE iv[12], BYTE tag[16], DWORD flags= 0);
    crypto_status_t decrypt_account_password (BYTE db_key[16], PBYTE password_ct, ULONG password_len, BYTE iv[12], BYTE tag[16], PBYTE password);

    crypto_status_t encrypt_database (BYTE db_key[16], PBYTE db_serialized, ULONG db_size, PBYTE db_ct, BYTE iv[12], BYTE tag[16], DWORD flags= 0);
    crypto_status_t decrypt_database (BYTE db_key[16], PBYTE db_ct, ULONG db_size, BYTE iv[12], BYTE tag[16], PBYTE db_serialized);

    crypto_status_t generate_password(PBYTE buffer, USHORT buffer_len, USHORT flags);
};
The public methods in this class are modeled to perform various high-level vault operations: unlock_vault, derive_master_key, validate_passphrase, encrypt_database, and so on. Each of these methods invokes one or more cryptographic algorithms in order to complete its task. For example, the unlock_vault method takes the passphrase supplied by the user, runs it through the SHA-256-based key derivation function, and uses the resulting key to decrypt the database key using AES-128 in GCM mode.
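A simplified sketch of that flow is shown below. It is not the actual implementation: error handling is abbreviated, and CRYPTO_OK/CRYPTO_ERR_DECRYPT stand in for whatever status codes the real class uses.

crypto_status_t Crypto::unlock_vault(PBYTE passphrase, ULONG passphrase_len,
    BYTE salt[8], BYTE db_key_ct[16], BYTE db_key_iv[12], BYTE db_key_tag[16],
    BYTE db_key_pt[16])
{
    BYTE mkey[16];
    crypto_status_t rv;

    // 1. Derive the master key from the passphrase using the SHA-256-based KDF.
    rv = derive_master_key(passphrase, passphrase_len, salt, mkey);
    if (rv != CRYPTO_OK) return rv;

    // 2. Use the master key to decrypt (and authenticate) the database key.
    rv = aes_128_gcm_decrypt(mkey, db_key_iv, 12, db_key_ct, 16,
        db_key_pt, 16, db_key_tag, 16);

    return (rv == CRYPTO_OK) ? CRYPTO_OK : CRYPTO_ERR_DECRYPT;
}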
These high-level methods do not, however, directly invoke the cryptographic primitives. Instead, they call into a middle layer which implements each cryptographic algorithm as a self-contained function.
Figure 2. Cryptographic library dependencies.
The private methods that make up our middle layer are built on the cryptographic primitives and support functions provided by the underlying cryptographic library, as illustrated in Figure 2. The non-Intel SGX implementation relies on Microsoft’s Cryptography API: Next Generation (CNG) for these, but we can’t use this same library inside the enclave because an enclave cannot have dependencies on external DLLs. To build the Intel SGX version of this class, we need to replace those underlying functions with the ones in the trusted crypto library that is distributed with the Intel SGX SDK. (As you might recall from Part 2, we were careful to choose cryptographic functions that were common to both CNG and the Intel SGX trusted crypto library for this very reason.)
To create our enclave-capable Crypto class, which we’ll call E_Crypto, what we need to do is modify these private methods:
crypto_status_t aes_128_gcm_encrypt(PBYTE key, PBYTE nonce, ULONG nonce_len, PBYTE pt, DWORD pt_len, PBYTE ct, DWORD ct_sz, PBYTE tag, DWORD tag_len);
crypto_status_t aes_128_gcm_decrypt(PBYTE key, PBYTE nonce, ULONG nonce_len, PBYTE ct, DWORD ct_len, PBYTE pt, DWORD pt_sz, PBYTE tag, DWORD tag_len);
crypto_status_t sha256_multi (PBYTE *messages, ULONG *lengths, BYTE hash[32]);
A description of each, and the primitives and support functions from CNG upon which they are built, is given in Table 1.
Method | Algorithm | CNG Primitives and Support Functions |
---|---|---|
aes_128_gcm_encrypt | AES encryption in GCM mode with a 128-bit key, a 96-bit nonce, and a 128-bit authentication tag | BCryptOpenAlgorithmProvider |
aes_128_gcm_decrypt | AES decryption in GCM mode with a 128-bit key, a 96-bit nonce, and a 128-bit authentication tag | BCryptOpenAlgorithmProvider |
sha256_multi | SHA-256 hash (incremental) | BCryptOpenAlgorithmProvider |
Table 1. Mapping Crypto class methods to Cryptography API: Next Generation functions
CNG provides very fine-grained control over its encryption algorithms, as well as several optimizations for performance. Our Crypto class is actually fairly inefficient: each time one of these algorithms is called, it initializes the underlying primitives from scratch and then completely closes them down. This is not a significant issue for a password manager, which is UI-driven and only encrypts a small amount of data at a time. A high-performance server application such as a web or database server would need a more sophisticated approach.
The API for the trusted cryptography library distributed with the Intel SGX SDK more closely resembles our middle layer than CNG. There is less granular control over the underlying primitives, but it does make developing our E_Crypto class much simpler. Table 2 shows the new mapping between our middle layer and the underlying provider.
Method | Algorithm | Intel® SGX Trusted Cryptography Library Primitives and Support Functions |
---|---|---|
aes_128_gcm_encrypt | AES encryption in GCM mode with a 128-bit key, a 96-bit nonce, and a 128-bit authentication tag | sgx_rijndael128GCM_encrypt |
aes_128_gcm_decrypt | AES decryption in GCM mode with a 128-bit key, a 96-bit nonce, and a 128-bit authentication tag | sgx_rijndael128GCM_decrypt |
sha256_multi | SHA-256 hash (incremental) | sgx_sha256_init |
Table 2. Mapping Crypto class methods to Intel® SGX Trusted Cryptography Library functions
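To make the mapping concrete, here is a sketch of how the encryption method in E_Crypto might wrap sgx_rijndael128GCM_encrypt. The status codes (CRYPTO_OK, CRYPTO_ERR_INVALID, CRYPTO_ERR_ENCRYPT) are placeholders, and the availability of the Windows-style typedefs (PBYTE, DWORD, and so on) inside the enclave build is an assumption:

#include <sgx_tcrypto.h>

crypto_status_t E_Crypto::aes_128_gcm_encrypt(PBYTE key, PBYTE nonce, ULONG nonce_len,
    PBYTE pt, DWORD pt_len, PBYTE ct, DWORD ct_sz, PBYTE tag, DWORD tag_len)
{
    // Our vault format uses a 96-bit nonce and a 128-bit authentication tag.
    if (nonce_len != 12 || tag_len != 16 || ct_sz < pt_len) return CRYPTO_ERR_INVALID;

    sgx_status_t rv = sgx_rijndael128GCM_encrypt(
        (sgx_aes_gcm_128bit_key_t *) key,  // 128-bit key
        pt, pt_len,                        // plaintext in
        ct,                                // ciphertext out (same length as plaintext)
        nonce, nonce_len,                  // 96-bit nonce
        NULL, 0,                           // no additional authenticated data
        (sgx_aes_gcm_128bit_tag_t *) tag); // 128-bit authentication tag out

    return (rv == SGX_SUCCESS) ? CRYPTO_OK : CRYPTO_ERR_ENCRYPT;
}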
The DRNG Class
The DRNG class is the interface to the on-chip digital random number generator, courtesy of Intel® Secure Key. To stay consistent with our previous actions we’ll name the enclave version of this class E_DRNG.
We’ll be making two changes in this class to prepare it for the enclave, but both of these changes are internal to the class methods. The class declaration will stay the same.
The CPUID Instruction
One of our application requirements is that the CPU supports Intel Secure Key. Even though Intel SGX is a newer feature than Secure Key, there is no guarantee that all future generations of all possible CPUs which support Intel SGX will also support Intel Secure Key. While it’s hard to conceive of such a situation today, best practice is to not assume a coupling between features where one does not exist. If a set of features have independent detection mechanisms, then you must assume that the features are independent of one another and check for them separately. This means that no matter how tempting it may be to assume that a CPU with support for Intel SGX will also support Intel Secure Key, we absolutely must not do so under any circumstances.
Further complicating the situation is the fact that Intel Secure Key actually consists of two independent features, both of which must also be checked separately. Our application must determine support for both the RDRAND and RDSEED instructions. For more information on Intel Secure Key, see the Intel Digital Random Number Generator (DRNG) Software Implementation Guide.
The constructor in the DRNG class is responsible for the RDRAND and RDSEED feature detection checks. It makes the necessary calls to the CPUID instruction using the compiler intrinsics __cpuid and __cpuidex, and sets a static, global variable with the results.
static int _drng_support= DRNG_SUPPORT_UNKNOWN;

DRNG::DRNG(void)
{
    int info[4];

    if (_drng_support != DRNG_SUPPORT_UNKNOWN) return;

    _drng_support= DRNG_SUPPORT_NONE;

    // Check our feature support

    __cpuid(info, 0);

    if ( memcmp(&(info[1]), "Genu", 4) ||
        memcmp(&(info[3]), "ineI", 4) ||
        memcmp(&(info[2]), "ntel", 4) ) return;

    __cpuidex(info, 1, 0);

    if ( ((UINT) info[2]) & (1<<30) ) _drng_support|= DRNG_SUPPORT_RDRAND;

#ifdef COMPILER_HAS_RDSEED_SUPPORT
    __cpuidex(info, 7, 0);

    if ( ((UINT) info[1]) & (1<<18) ) _drng_support|= DRNG_SUPPORT_RDSEED;
#endif
}
The problem for the E_DRNG class is that CPUID is not a legal instruction inside of an enclave. To call CPUID, one must use an OCALL to exit the enclave and then invoke CPUID in untrusted code. Fortunately, the Intel SGX SDK designers have created two convenient functions that greatly simplify this task: sgx_cpuid and sgx_cpuidex. These functions automatically perform the OCALL for us, and the OCALL itself is automatically generated. The only requirement is that the EDL file must import the sgx_tstdc.edl
header:
enclave {
    /* Needed for the call to sgx_cpuidex */
    from "sgx_tstdc.edl" import *;

    trusted {
        /* define ECALLs here. */

        public int ve_initialize ();
        public int ve_initialize_from_header ([in, count=len] unsigned char *header, uint16_t len);

        /* Our other ECALLs have been omitted for brevity */
    };

    untrusted {
    };
};
The feature detection code in the E_DRNG constructor becomes:
static int _drng_support= DRNG_SUPPORT_UNKNOWN;

E_DRNG::E_DRNG(void)
{
    int info[4];
    sgx_status_t status;

    if (_drng_support != DRNG_SUPPORT_UNKNOWN) return;

    _drng_support = DRNG_SUPPORT_NONE;

    // Check our feature support

    status= sgx_cpuid(info, 0);
    if (status != SGX_SUCCESS) return;

    if (memcmp(&(info[1]), "Genu", 4) ||
        memcmp(&(info[3]), "ineI", 4) ||
        memcmp(&(info[2]), "ntel", 4)) return;

    status= sgx_cpuidex(info, 1, 0);
    if (status != SGX_SUCCESS) return;

    if (info[2] & (1 << 30)) _drng_support |= DRNG_SUPPORT_RDRAND;

#ifdef COMPILER_HAS_RDSEED_SUPPORT
    status= sgx_cpuidex(info, 7, 0);
    if (status != SGX_SUCCESS) return;

    if (info[1] & (1 << 18)) _drng_support |= DRNG_SUPPORT_RDSEED;
#endif
}
Because calls to the CPUID instruction must take place in untrusted memory, the results of CPUID cannot be trusted! This warning applies whether you run CPUID yourself or rely on the SGX functions to do it for you. The Intel SGX SDK offers this advice: “Code should verify the results and perform a threat evaluation to determine the impact on trusted code if the results were spoofed.” In our tutorial password manager, there are three possible outcomes:
Generating Seeds from RDRAND
In the event that the underlying CPU does not support the RDSEED instruction, we need to be able to use the RDRAND instruction to generate random seeds that are functionally equivalent to what we would have received from RDSEED if it were available. The Intel Digital Random Number Generator (DRNG) Software Implementation Guide describes the process of obtaining random seeds from RDRAND in detail, but the short version is that one method for doing this is to generate 512 pairs of 128-bit values and mix the intermediate values together using the CBC-MAC mode of AES to produce a single, 128-bit seed. The process is repeated to generate as many seeds as necessary.
In the non-Intel SGX code path, the method seed_from_rdrand uses CNG to build the cryptographic algorithm. Since the Intel SGX code path can’t depend on CNG, we once again need to turn to the trusted cryptographic library that is distributed with the Intel SGX SDK. The changes are summarized in Table 3.
Algorithm | CNG Primitives and Support Functions | Intel® SGX Trusted Cryptography Library Primitives and Support Functions |
---|---|---|
aes-cmac | BCryptOpenAlgorithmProvider | sgx_cmac128_init |
Table 3. Cryptographic function changes to the E_DRNG class’s seed_from_rdrand method
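For illustration only, a heavily simplified sketch of this seed construction using the trusted library's CMAC primitives is shown below. It compresses the description above: retry handling, the exact block counts, and the fixed MAC key are all simplifications, so consult the DRNG implementation guide for the real procedure.

#include <sgx_tcrypto.h>
#include <immintrin.h>

// Mix RDRAND output into a single 128-bit seed using AES-CMAC (sketch only).
static int seed_from_rdrand_sketch(uint8_t seed[16])
{
    sgx_cmac_128bit_key_t key = { 0 };   // fixed, non-secret key for the MAC
    sgx_cmac_state_handle_t h;
    unsigned long long block[2];

    if (sgx_cmac128_init(&key, &h) != SGX_SUCCESS) return 0;

    // Feed 512 128-bit blocks of RDRAND output into the CMAC.
    for (int i = 0; i < 512; ++i) {
        if (!_rdrand64_step(&block[0]) || !_rdrand64_step(&block[1])) {
            sgx_cmac128_close(h);
            return 0;
        }
        if (sgx_cmac128_update((uint8_t *) block, 16, h) != SGX_SUCCESS) {
            sgx_cmac128_close(h);
            return 0;
        }
    }

    // The final MAC value becomes the 128-bit seed.
    if (sgx_cmac128_final(h, (sgx_cmac_128bit_tag_t *) seed) != SGX_SUCCESS) {
        sgx_cmac128_close(h);
        return 0;
    }

    sgx_cmac128_close(h);
    return 1;
}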
Why is this algorithm embedded in the DRNG class and not implemented in the Crypto class with the other cryptographic algorithms? This is simply a design decision. The DRNG class only needs this one algorithm, so we chose not to create a co-dependency between DRNG and Crypto (currently, Crypto does depend on DRNG). The Crypto class is also structured to provide the cryptographic services for vault operations rather than function as a general-purpose cryptographic API.
Why Not Use sgx_read_rand?
The Intel SGX SDK provides the function sgx_read_rand as a means of obtaining random numbers inside of an enclave. There are three reasons why we aren’t using it:
- As documented in the Intel SGX SDK, this function is “provided to replace the C standard pseudo-random sequence generation functions inside the enclave, since these standard functions are not supported in the enclave, such as rand, srand, etc.” While sgx_read_rand does call the RDRAND instruction if it is supported by the CPU, it falls back to the trusted C library’s implementation of srand and rand if it is not. The random numbers produced by the C library are not suitable for cryptographic use. It is highly unlikely that this situation will ever occur, but as mentioned in the section on CPUID, we must not assume that it will never occur.
- There is no Intel SGX SDK function for calling the RDSEED instruction and that means we still have to use compiler intrinsics in our code. While we could replace the RDRAND intrinsics with calls to sgx_read_rand, it would not gain us anything in terms of code management or structure and it would cost us additional time.
- The intrinsics will marginally outperform sgx_read_rand since there is one less layer of function calls in the resulting code.
Wrapping Up
With these code changes, we have a fully functioning enclave! However, there are still some inefficiencies in the implementation and some gaps in functionality, and we’ll revisit the enclave design in Parts 7 and 8 in order to address them.
As mentioned in the introduction, there is sample code provided with this part for you to download. The attached archive includes the source code for the Tutorial Password Manager core, including the enclave and its wrapper functions. This source code should be functionally identical to Part 3, only we have hardcoded Intel SGX support to be on.
Coming Up Next
In Part 6 of the tutorial we’ll add dynamic feature detection to the password manager, allowing it to choose the appropriate code path based on whether or not Intel SGX is supported on the underlying platform. Stay tuned!
Intel® Software Guard Extensions Tutorial Series: Part 6, Dual Code Paths
In Part 6 of the Intel® Software Guard Extensions (Intel® SGX) tutorial series, we set aside the enclave to address an outstanding design requirement that was laid out in Part 2, Application Design: provide support for dual code paths. We want to make sure our Tutorial Password Manager will function on hosts both with and without Intel SGX capability. Much of the content in this part comes from the article, Properly Detecting Intel® Software Guard Extensions in Your Applications.
You can find the list of all of the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.
There is source code provided with this installment of the series.
All Intel® Software Guard Extensions Applications Need Dual Code Paths
First it’s important to point out that all Intel SGX applications must have dual code paths. Even if an application is written so that it should only execute if Intel SGX is available and enabled, a fallback code path must exist so that you can present a meaningful error message to the user and then exit gracefully.
In short, an application should never crash or fail to launch solely because the platform does not support Intel SGX.
Scoping the Problem
In Part 5 of the series we completed our first version of our application enclave and tested it by hardcoding the enclave support to be on. That was done by setting the _supports_sgx flag in PasswordManagerCoreNative.cpp.
PasswordManagerCoreNative::PasswordManagerCoreNative(void)
{
    _supports_sgx= 1;
    adsize= 0;
    accountdata= NULL;
    timer = NULL;
}
Obviously, we can’t leave this on by default. The convention for feature detection is that features are off by default and turned on if they are detected. So our first step is to undo this change and set the flag back to 0, effectively disabling the Intel SGX code path.
PasswordManagerCoreNative::PasswordManagerCoreNative(void)
{
    _supports_sgx= 0;
    adsize= 0;
    accountdata= NULL;
    timer = NULL;
}
However, before we get into the feature detection procedure, we’ll give the console application that runs our test suite, CLI Test App, a quick functional test by executing it on an older system that does not have the Intel SGX feature. With this flag set to zero, the application will not choose the Intel SGX code path and thus should run normally.
Here’s the output from a 4th generation Intel® Core™ i7 processor-based laptop, running Microsoft Windows* 8.1, 64-bit. This system does not support Intel SGX.
What Happened?
Clearly we have a problem even when the Intel SGX code path is explicitly disabled in the software. This application, as written, cannot execute on a system without Intel SGX support. It didn’t even start executing. So what’s going on?
The clue in this case comes from the error message in the console window:
System.IO.FileNotFoundException: Could not load file or assembly ‘PasswordManagerCore.dll’ or one of its dependencies. The specified file could not be found.
Let’s take a look at PasswordManagerCore.dll and its dependencies:
In addition to the core OS libraries, we have dependencies on bcrypt.lib and EnclaveBridge.lib, which will require bcrypt.dll and EnclaveBridge.dll at runtime. Since bcrypt.dll comes from Microsoft and is included in the OS, we can reasonably assume its dependencies, if any, are already installed. That leaves EnclaveBridge.dll.
Examining its dependencies, we see the following:
This is the problem. Even though we have the Intel SGX code path explicitly disabled, EnclaveBridge.dll still has references to the Intel SGX runtime libraries. All symbols in an object module must be resolved as soon as it is loaded. It doesn’t matter if we disable the Intel SGX code path: undefined symbols are still present in the DLL. When PasswordManagerCore.dll loads, it resolves its undefined symbols by loading bcrypt.dll and EnclaveBridge.dll, the latter of which, in turn, attempts to resolve its undefined symbols by loading sgx_urts.dll and sgx_uae_service.dll. The system we tried to run our command-line test application on does not have these libraries, and since the OS can’t resolve all of the symbols it throws an exception and the program crashes before it even starts.
These two DLLs are part of the Intel SGX Platform Software (PSW) package, and without them Intel SGX applications written using the Intel SGX Software Development Kit (SDK) cannot execute. Our application needs to be able to run even if these libraries are not present.
The Platform Software Package
As mentioned above, the runtime libraries are part of the PSW. In addition to these support libraries, the PSW includes:
- Services that support and maintain the trusted compute block (TCB) on the system
- Services that perform and manage certain Intel SGX operations such as attestation
- Interfaces to platform services such as trusted time and the monotonic counters
The PSW must be installed by the application installer when deploying an Intel SGX application, because Intel does not offer the PSW for direct download by end users. Software vendors must not assume that it will already be present and installed on the destination system. In fact, the license agreement for Intel SGX specifically states that licensees must re-distribute the PSW with their applications.
We’ll discuss the PSW installer in more detail in a future installment of the series covering packaging and deployment.
Detecting Intel Software Guard Extensions Support
So far we’ve focused on the problem of just starting our application on systems without Intel SGX support, and more specifically, without the PSW. The next step is to detect whether or not Intel SGX support is present and enabled once the application is running.
Intel SGX feature detection is, unfortunately, a complicated procedure. For a system to be Intel SGX capable, four conditions must be met:
- The CPU must support Intel SGX.
- The BIOS must support Intel SGX.
- In the BIOS, Intel SGX must be explicitly enabled or set to the “software controlled” state.
- The PSW must be installed on the platform.
Note that the CPUID instruction, alone, is not sufficient to detect the usability of Intel SGX on a platform. It can tell you whether or not the CPU supports the feature, but it doesn’t know anything about the BIOS configuration or the software that is installed on a system. Relying solely on the CPUID results to make decisions about Intel SGX support can potentially lead to a runtime fault.
To make feature detection even more difficult, examining the state of the BIOS is not a trivial task and is generally not possible from a user process. Fortunately the Intel SGX SDK provides a simple solution: the function sgx_enable_device will both check for Intel SGX capability and attempt to enable it if the BIOS is set to the software control state (the purpose of the software control setting is to allow applications to enable Intel SGX without requiring users to reboot their systems and enter their BIOS setup screens, a particularly daunting and intimidating task for non-technical users).
The problem with sgx_enable_device, though, is that it is part of the Intel SGX runtime, which means the PSW must be installed on the system in order to use it. So before we attempt to call sgx_enable_device, we must first detect whether or not the PSW is present.
Implementation
With our problem scoped out, we can now lay out the steps that must be followed, in order, for our dual-code path application to function properly. Our application must:
- Load and begin executing even without the Intel SGX runtime libraries.
- Determine whether or not the PSW package is installed.
- Determine whether or not Intel SGX is enabled (and attempt to enable it).
Loading and Executing without the Intel Software Guard Extensions Runtime
Our main application depends on PasswordManagerCore.dll, which depends on EnclaveBridge.dll, which in turn depends on the Intel SGX runtime. Since all symbols need to be resolved when an application loads, we need a way to prevent the loader from trying to resolve symbols that come from the Intel SGX runtime libraries. There are two options:
Option #1: Dynamic Loading
In dynamic loading, you don’t explicitly link the library in the project. Instead you use system calls to load the library at runtime and then look up the names of each function you plan to use in order to get the addresses of where they have been placed in memory. Functions in the library are then invoked indirectly via function pointers.
Dynamic loading is a hassle. Even if you only need a handful of functions, it can be a tedious process to prototype function pointers for every function that is needed and get their load address, one at a time. You also lose some of the benefits provided by the integrated development environment (such as prototype assistance) since you are no longer explicitly calling functions by name.
Dynamic loading is typically used in extensible application architectures (for example, plug-ins).
Option #2: Delayed-Loaded DLLs
In this approach, you dynamically link all your libraries in the project, but instruct Windows to do delayed loading of the problem DLL. When a DLL is delay-loaded, Windows does not attempt to resolve symbols that are defined by that DLL when the application starts. Instead it waits until the program makes its first call to a function that is defined in that DLL, at which point the DLL is loaded and the symbols get resolved (along with any of its dependencies). What this means is that a DLL is not loaded until the application needs it. A beneficial side effect of this approach is that it allows applications to reference a DLL that is not installed, so long as no functions in that DLL are ever called.
When the Intel SGX feature flag is off, that is exactly the situation we are in so we will go with option #2.
You specify the DLL to be delay-loaded in the project configuration for the dependent application or DLL. For the Tutorial Password Manager, the best DLL to mark for delayed loading is EnclaveBridge.dll, since we only call into it when the Intel SGX code path is enabled. If this DLL doesn’t load, neither will the two Intel SGX runtime DLLs.
We set the option in the Linker -> Input page of the PasswordManagerCore.dll project configuration:
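(For reference, this project setting corresponds to the linker switch below; delayimp.lib supplies the delay-load helper, and recent versions of Visual Studio typically link it automatically when the option is set.)

/DELAYLOAD:"EnclaveBridge.dll"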
After the DLL is rebuilt and installed on our 4th generation Intel Core processor system, the console test application works as expected.
Detecting the Platform Software Package
Before we can call the sgx_enable_device function to check for Intel SGX support on the platform, we first have to make sure that the PSW package is installed because sgx_enable_device is part of the Intel SGX runtime. The best way to do this is to actually try to load the runtime libraries.
We know from the previous step that we can’t just dynamically link them because that will cause an exception when we attempt to run the program on a system that does not support Intel SGX (or have the PSW package installed). But we also can’t rely on delay-loaded DLLs either: delayed loading can’t tell us if a library is installed because if it isn’t, the application will still crash! That means we must use dynamic loading to test for the presence of the runtime libraries.
The PSW runtime libraries should be installed in the Windows system directory so we’ll use GetSystemDirectory to get that path, and limit the DLL search path via a call to SetDllDirectory. Finally, the two libraries will be loaded using LoadLibrary. If either of these calls fail, we know the PSW is not installed and that the main application should not attempt to run the Intel SGX code path.
Detecting and Enabling Intel Software Guard Extensions
Since the previous step dynamically loads the PSW runtime libraries, we can just look up the symbol for sgx_enable_device manually and then invoke it via a function pointer. The result will tell us whether or not Intel SGX is enabled.
Implementation
To implement this in the Tutorial Password Manager we’ll create a new DLL called FeatureSupport.dll. We can safely dynamically link this DLL from the main application since it has no explicit dependencies on other DLLs.
Our feature detection will be rolled into a C++/CLI class called FeatureSupport, which will also include some high-level functions for getting more information about the state of Intel SGX. In rare cases, enabling Intel SGX via software may require a reboot, and in rarer cases the software enable action fails and the user may be forced to enable it explicitly in their BIOS.
The class declaration for FeatureSupport is shown below.
typedef sgx_status_t(SGXAPI *fp_sgx_enable_device_t)(sgx_device_status_t *);

public ref class FeatureSupport {
private:
    UINT sgx_support;
    HINSTANCE h_urts, h_service;

    // Function pointers

    fp_sgx_enable_device_t fp_sgx_enable_device;

    int is_psw_installed(void);
    void check_sgx_support(void);
    void load_functions(void);

public:
    FeatureSupport();
    ~FeatureSupport();

    UINT get_sgx_support(void);
    int is_enabled(void);
    int is_supported(void);
    int reboot_required(void);
    int bios_enable_required(void);

    // Wrappers around SGX functions

    sgx_status_t enable_device(sgx_device_status_t *device_status);
};
Here are the low-level routines that check for the PSW package and attempt to detect and enable Intel SGX.
int FeatureSupport::is_psw_installed()
{
    _TCHAR *systemdir;
    UINT rv, sz;

    // Get the system directory path. Start by finding out how much space we need
    // to hold it.

    sz = GetSystemDirectory(NULL, 0);
    if (sz == 0) return 0;

    systemdir = new _TCHAR[sz + 1];
    rv = GetSystemDirectory(systemdir, sz);
    if (rv == 0 || rv > sz) return 0;

    // Set our DLL search path to just the System directory so we don't accidentally
    // load the DLLs from an untrusted path.

    if (SetDllDirectory(systemdir) == 0) {
        delete[] systemdir;
        return 0;
    }

    delete[] systemdir; // No longer need this

    // Need to be able to load both of these DLLs from the System directory.

    if ((h_service = LoadLibrary(_T("sgx_uae_service.dll"))) == NULL) {
        return 0;
    }

    if ((h_urts = LoadLibrary(_T("sgx_urts.dll"))) == NULL) {
        FreeLibrary(h_service);
        h_service = NULL;
        return 0;
    }

    load_functions();

    return 1;
}

void FeatureSupport::check_sgx_support()
{
    sgx_device_status_t sgx_device_status;

    if (sgx_support != SGX_SUPPORT_UNKNOWN) return;

    sgx_support = SGX_SUPPORT_NO;

    // Check for the PSW

    if (!is_psw_installed()) return;

    sgx_support = SGX_SUPPORT_YES;

    // Try to enable SGX

    if (this->enable_device(&sgx_device_status) != SGX_SUCCESS) return;

    // If SGX isn't enabled yet, perform the software opt-in/enable.

    if (sgx_device_status != SGX_ENABLED) {
        switch (sgx_device_status) {
        case SGX_DISABLED_REBOOT_REQUIRED:
            // A reboot is required.
            sgx_support |= SGX_SUPPORT_REBOOT_REQUIRED;
            break;
        case SGX_DISABLED_LEGACY_OS:
            // BIOS enabling is required
            sgx_support |= SGX_SUPPORT_ENABLE_REQUIRED;
            break;
        }

        return;
    }

    sgx_support |= SGX_SUPPORT_ENABLED;
}

void FeatureSupport::load_functions()
{
    fp_sgx_enable_device = (fp_sgx_enable_device_t)GetProcAddress(h_service, "sgx_enable_device");
}

// Wrappers around SDK functions so the user doesn't have to mess with dynamic loading by hand.

sgx_status_t FeatureSupport::enable_device(sgx_device_status_t *device_status)
{
    check_sgx_support();

    if (fp_sgx_enable_device == NULL) {
        return SGX_ERROR_UNEXPECTED;
    }

    return fp_sgx_enable_device(device_status);
}
Wrapping Up
With these code changes, we have integrated Intel SGX feature detection into our application! It will execute smoothly on systems both with and without Intel SGX support and choose the appropriate code branch.
As mentioned in the introduction, there is sample code provided with this part for you to download. The attached archive includes the source code for the Tutorial Password Manager core, including the new feature detection DLL. Additionally, we have added a new GUI-based test program that automatically selects the Intel SGX code path, but lets you disable it if desired (this option is only available if Intel SGX is supported on the system).
The console-based test program has also been updated to detect Intel SGX, though it cannot be configured to turn it off without modifying the source code.
Coming Up Next
We’ll revisit the enclave in Part 7 in order to fine-tune the interface. Stay tuned!
Intel Software Guard Extensions Tutorial Series update: a new SDK version and a brief intermission
An update to the Windows version of the Intel Software Guard Extensions SDK was just posted to the Developer Zone. This new release, version 1.7, adds Visual Studio Professional 2015 Update 3 to the list of supported Microsoft IDEs, and the Intel SGX Tutorial Series will take a brief break to retool for the new development environment. My goal is to have the next release posted before the end of November. This is also a good time to integrate the Tutorial Password Manager's GUI with the core code base.
I apologize for the delay, but I feel it is important to move the tutorial series to the most recent toolkit (at least while it is being actively developed). I will also validate the code samples already released to date, and update any that have serious build issues.
Thank you for your patience!
Intel® Software Guard Extensions Tutorial Series: Part 7, Refining the Enclave
Part 7 of the Intel® Software Guard Extensions (Intel® SGX) tutorial series revisits the enclave interface and adds a small refinement to make it simpler and more efficient. We’ll discuss how the proxy functions marshal data between unprotected memory space and the enclave, and we’ll also discuss one of the advanced features of the Enclave Definition Language (EDL) syntax.
You can find a list of all of the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.
Source code is provided with this installment of the series. With this release we have migrated the application to the 1.7 release of the Intel SGX SDK and also moved our development environment to Microsoft Visual Studio* Professional 2015.
The Proxy Functions
When building an enclave using the Intel SGX SDK you define the interface to the enclave in the EDL. The EDL specifies which functions are ECALLs (“enclave calls,” the functions that enter the enclave) and which ones are OCALLs (“outside calls,” the calls to untrusted functions from within the enclave).
When the project is built, the Edger8r tool that is included with the Intel SGX SDK parses the EDL file and generates a series of proxy functions. These proxy functions are essentially wrappers around the real functions that are prototyped in the EDL. Each ECALL and OCALL gets a pair of proxy functions: a trusted half and an untrusted half. The trusted functions go into EnclaveProject_t.h and EnclaveProject_t.c and are included in the Autogenerated Files folder of your enclave project. The untrusted proxies go into EnclaveProject_u.h and EnclaveProject_u.c and are placed in the Autogenerated Files folder of the project that will be interfacing with your enclave.
Your program does not call the ECALL and OCALL functions directly; it calls the proxy functions. When you make an ECALL, you call the untrusted proxy function for the ECALL, which in turn calls the trusted proxy function inside the enclave. That proxy then calls the “real” ECALL and the return value propagates back to the untrusted function. This sequence is shown in Figure 1. When you make an OCALL, the sequence is reversed: you call the trusted proxy function for the OCALL, which calls an untrusted proxy function outside the enclave that, in turn, invokes the “real” OCALL.
Figure 1. Proxy functions for an ECALL.
The proxy functions are responsible for:
- Marshaling data into and out of the enclave
- Placing the return value of the real ECALL or OCALL in an address referenced by a pointer parameter
- Returning the success or failure of the ECALL or OCALL itself as an sgx_status_t value
Note that this means each ECALL or OCALL potentially has two return values: the success of the ECALL or OCALL itself (that is, whether we were able to successfully enter or exit the enclave), and the return value of the function being called in the ECALL or OCALL.
The EDL syntax for the ECALL functions ve_lock() and ve_unlock() in our Tutorial Password Manager’s enclave is shown below:
enclave {
    trusted {
        public void ve_lock ();
        public int ve_unlock ([in, string] char *password);
    }
}
And here are the untrusted proxy function prototypes that are generated by the Edger8r tool:
sgx_status_t ve_lock(sgx_enclave_id_t eid);
sgx_status_t ve_unlock(sgx_enclave_id_t eid, int* retval, char* password);
Note the additional arguments that have been added to the parameter list for each function and that the functions now return a type of sgx_status_t.
Both proxy functions need the enclave identifier, which is passed in the first parameter, eid. The ve_lock() function has no parameters and does not return a value so no further changes are necessary. The ve_unlock() function, however, does both. The second argument to the proxy function is a pointer to an address that will store the return value from the real ve_unlock() function in the enclave, in this case a return value of type int. The actual function parameter, char *password, is included after that.
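As an illustration, here is a minimal sketch of how an application might call the untrusted ve_unlock() proxy and handle both return values. It assumes the enclave has already been launched (for example, with sgx_create_enclave()) and that its ID is stored in eid; the helper function name and the interpretation of the int return value are illustrative assumptions, not part of the Tutorial Password Manager source.

#include <sgx_urts.h>
#include "EnclaveProject_u.h"   // untrusted proxies generated by the Edger8r tool;
                                // the actual file name depends on your enclave project

// Hypothetical helper: attempt to unlock the vault with the supplied password.
// Returns true only if the ECALL succeeded AND the real ve_unlock() reported success.
bool unlock_vault (sgx_enclave_id_t eid, char *password)
{
    int vault_rv = 0;     // receives the return value of the real ve_unlock()
    sgx_status_t status = ve_unlock(eid, &vault_rv, password);

    if (status != SGX_SUCCESS) {
        // The ECALL itself failed: we never entered the enclave
        // (for example, the enclave was lost or a parameter was invalid).
        return false;
    }

    // We entered and exited the enclave successfully; now interpret the
    // function's own return value (this sketch assumes nonzero means success).
    return (vault_rv != 0);
}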
Data Marshaling
The untrusted portion of an application does not have access to enclave memory. It cannot read from or write to these protected memory pages. This presents some difficulties when the function parameters include pointers. OCALLs are especially problematic, because memory allocated inside the enclave is not accessible to the OCALL, but even ECALLs can have issues. Enclave memory is mapped into the application’s memory space, so enclave pages can be adjacent to unprotected memory pages. If you pass a pointer to untrusted memory into an enclave, and then fail to do appropriate bounds checking in your enclave, you may inadvertently cross the enclave boundary when reading or writing to that memory in your ECALL.
The Intel SGX SDK’s solution to this problem is to copy the contents of data buffers into and out of enclaves, and have the ECALLs and OCALLs operate on these copies of the original memory buffer. When you pass a pointer into an enclave, you specify in the EDL whether the buffer referenced by the pointer is being passed into the call, out of the call, or in both directions, and then you specify the size of the buffer. The proxy functions generated by the Edger8r tool use this information to check that the address range does not cross the enclave boundary, copy the data into or out of the enclave as indicated, and then substitute a pointer to the copy of the buffer in place of the original pointer.
This is the slow-and-safe approach to marshaling data and pointers between unprotected memory and enclave memory. However, this approach has drawbacks that may make it undesirable in some cases:
- It’s slow, since each memory buffer is checked and copied.
- It requires additional heap space in your enclave to store the copies of the data buffers.
- The EDL syntax is a little verbose.
There are also cases where you just need to pass a raw pointer into an ECALL and out to an OCALL without it ever being used inside the enclave, such as when passing a function pointer for a callback function straight through to an OCALL. In this case, there is no data buffer per se, just the pointer address itself, and the marshaling functions generated by Edger8r actually get in the way.
The Solution: user_check
Fortunately, the EDL language does support passing a raw pointer address into an ECALL or an OCALL, skipping both the boundary checks and the data buffer copy. The user_check parameter tells the Edger8r tool to pass the pointer as it is and assume that the developer has done the proper bounds checking on the address. When you specify user_check, you are essentially trading safety for performance.

A pointer marked with user_check does not have a direction (in or out) associated with it, because there is no buffer copy taking place. Mixing user_check with in or out will result in an error at compile time. Similarly, you don’t supply a count or size parameter, either.
In the Tutorial Password Manager, the most appropriate place to use the user_check parameter is in the ECALLs that load and store the encrypted password vault. While our design constraints put a practical limit on the size of the vault itself, generally speaking these sorts of bulk reads and writes benefit from allowing the enclave to operate directly on untrusted memory.
The original EDL for ve_load_vault() and ve_get_vault() looks like this:
public int ve_load_vault ([in, count=len] unsigned char *edata, uint32_t len);
public int ve_get_vault ([out, count=len] unsigned char *edata, uint32_t len);
Rewriting these to specify user_check results in the following:
public int ve_load_vault ([user_check] unsigned char *edata);
public int ve_get_vault ([user_check] unsigned char *edata, uint32_t len);
Notice that we were able to drop the len parameter from ve_load_vault(). As you might recall from Part 4, the issue we had with this function was that although the length of the vault is stored as a variable in the enclave, the proxy functions don’t have access to it. In order for the ECALL’s proxy functions to copy the incoming data buffer, we had to supply the length in the EDL so that the Edger8r tool would know the size of the buffer. With the user_check option, there is no buffer copy operation, so this problem goes away. The enclave can read directly from untrusted memory, and it can use its internal variable to determine how many bytes to read.
However, we still send the length as a parameter to ve_get_vault(). This is a safety check to ensure that we don’t accidentally overflow a buffer when fetching the encrypted vault from the enclave.
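To make the trade-off concrete, here is a minimal sketch of what the trusted side of ve_load_vault() might look like when edata is declared [user_check]. This is an illustration under stated assumptions, not the actual Tutorial Password Manager code: the buffer and length variables, their sizes, and the return-value convention are hypothetical, and it assumes the enclave already knows the vault length from earlier initialization.

#include <sgx_trts.h>     // for sgx_is_outside_enclave()
#include <string.h>
#include <stdint.h>

// Hypothetical enclave state; names and sizes are for illustration only.
static unsigned char vault_data[8192];
static uint32_t vault_len = 0;    // set elsewhere inside the enclave

int ve_load_vault (unsigned char *edata)
{
    // With [user_check], edata is a raw, untrusted pointer:
    // the enclave must do its own validation before touching it.
    if (edata == NULL || vault_len == 0 || vault_len > sizeof(vault_data)) return 0;

    // Verify the entire source range lies outside the enclave before reading from it.
    if (!sgx_is_outside_enclave(edata, vault_len)) return 0;

    memcpy(vault_data, edata, vault_len);
    return 1;   // this sketch assumes nonzero indicates success
}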
Summary
The EDL provides three options for passing pointers into an ECALL or an OCALL: in, out, and user_check. These options are summarized in Table 1.
Specifier/Direction | ECALL | OCALL |
---|---|---|
in | The buffer is copied from the application into the enclave. Changes will only affect the buffer inside the enclave. | The buffer is copied from the enclave to the application. Changes will only affect the buffer outside the enclave. |
out | A buffer will be allocated inside the enclave and initialized with zeros. It will be copied to the original buffer when the ECALL exits. | A buffer will be allocated outside the enclave and initialized with zeros. This untrusted buffer will be copied to the original buffer in the enclave when the OCALL exits. |
in, out | Data is copied back and forth. | Data is copied back and forth. |
user_check | The pointer is not checked. The raw address is passed. | The pointer is not checked. The raw address is passed. |
Table 1. Pointer specifiers and their meanings in ECALLs and OCALLs.
If you use the direction indicators, the data buffer referenced by your pointer gets copied and you must supply a count so that the Edger8r tool can determine how many bytes are in the buffer. If you specify user_check, the raw pointer is passed to the ECALL or OCALL unaltered.
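From the untrusted side, calling a user_check ECALL looks the same as calling any other generated proxy; the only difference is that the raw pointer you pass is handed to the enclave unaltered. The sketch below assumes an untrusted proxy prototype for ve_get_vault() that follows the same pattern shown earlier (enclave ID first, then a pointer for the function’s return value); the buffer handling and return-value interpretation are illustrative assumptions.

#include <sgx_urts.h>
#include <stdint.h>
#include "EnclaveProject_u.h"   // generated proxies; actual name depends on your enclave project

// Hypothetical caller: fetch the encrypted vault from the enclave into a
// buffer we own, passing the buffer size so the enclave can avoid an overflow.
bool fetch_vault (sgx_enclave_id_t eid, unsigned char *buffer, uint32_t buffer_len)
{
    int vault_rv = 0;
    sgx_status_t status = ve_get_vault(eid, &vault_rv, buffer, buffer_len);

    // Check both the ECALL status and the function's own return value.
    return (status == SGX_SUCCESS && vault_rv != 0);
}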
Sample Code
The code sample for this part of the series has been updated to build against the Intel SGX SDK version 1.7 using Microsoft Visual Studio 2015. It should still work with the Intel SGX SDK version 1.6 and Visual Studio 2013, but we encourage you to update to the newer release of the Intel SGX SDK.
Coming Up Next
In Part 8 of the series, we’ll add support for power events. Stay tuned!
Intel® Software Guard Extensions Tutorial Series change and delay
A high priority project combined with the holidays took me away from the Intel® Software Guard Extensions Tutorial Series briefly, but it is moving ahead again and I expect Part 8 to come out in the next two or three weeks. Alas, I'll be covering a smaller topic before proceeding to power events and data sealing, but the silver lining is that the long-promised GUI will finally make its appearance. Part 8 will cover the GUI integration and some of the challenges of building a security application on top of a managed code base, and in particular the goal of not undermining the security that is provided by Intel SGX. Power events and data sealing will come in Part 9.
As always, thank you for following along!