ApexSharp Salesforce C# API
An Intelligent C# Wrapper for the Salesforce REST API
Feedback
To provide feedback or ask questions, please use the GitHub discussions feature. Follow this link to access the GitHub discussion and leave your comments. Your feedback is highly appreciated.
Project
Build an advanced but easy-to-use C# SDK that can efficiently interact with Salesforce APIs. My current plan is to explore its capabilities and determine how it can best support the following APIs.
SObjects
Bulk V2
Tooling
Data Cloud
Helper
Goals
Take advantage of the latest .NET features: .NET 8 is currently in beta, with the release scheduled for November, according to Microsoft.
Use LINQ instead of dealing with SOQL.
Take advantage of async / await in C#
Hide the complexity: for example, Select<Contact>() will return all the Contact objects. The API chooses the best path based on the number of records, whether that is 1 record or 100 million.
Since joining Salesforce a few years ago, I have gained extensive knowledge of how the Salesforce API works, and this SDK makes full use of that expertise.
Speed-wise, it's almost identical to making raw REST/JSON calls.
Use some of the most popular open-source NuGet libraries
Read Records - Select
The REST calls and pagination are hidden from the developer. If there are 50K records and you do not set a limit, you will get all 50K records.
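For example, fetching every Contact is a one-liner. Here is a minimal sketch, assuming the Select<Contact>() entry point mentioned in the Goals (the `client` instance is hypothetical):

```csharp
// One call returns every Contact; the SDK decides how many REST
// requests to make based on the record count.
List<Contact> contacts = await client.Select<Contact>();
```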
Most of us would assume the code above generates the following SOQL:
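Presumably something like this, using FIELDS(ALL), SOQL's shorthand for every field:

```sql
SELECT FIELDS(ALL) FROM Contact
```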
But that is not valid, as there is a 200-record limit when you query all the fields. To overcome this, we first make the following call to find the record count:
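Most likely a plain count query:

```sql
SELECT COUNT() FROM Contact
```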
If the record count is less than 200, the following SOQL is run:
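Presumably the single-shot form:

```sql
SELECT FIELDS(ALL) FROM Contact LIMIT 200
```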
If the record count is between 200 and 2200, multiple calls are made
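Presumably FIELDS(ALL) pages driven by OFFSET; since SOQL caps OFFSET at 2,000 rows, this also explains the 2,200 ceiling (2,000 maximum offset plus one 200-row page):

```sql
SELECT FIELDS(ALL) FROM Contact LIMIT 200 OFFSET 0
SELECT FIELDS(ALL) FROM Contact LIMIT 200 OFFSET 200
SELECT FIELDS(ALL) FROM Contact LIMIT 200 OFFSET 2000
```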
If it's more than 2,200 records, things get a bit more complicated. Since we know all the field names, we build a SOQL statement that lists every field explicitly, then query 2,000 records at a time and append each page to a list. The user still gets ALL the records; behind the scenes we simply make multiple queries. A sketch of this path follows.
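A minimal sketch of this path, assuming nextRecordsUrl-style paging (all names here are illustrative, not the SDK's actual API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public class Contact
{
    public string Id { get; set; }
    public string LastName { get; set; }
}

// Shape of one page of query results; mirrors the REST API's
// records + nextRecordsUrl response body.
public class QueryPage<T>
{
    public List<T> Records { get; set; } = new();
    public string NextRecordsUrl { get; set; }   // null when there are no more pages
}

public static class LargeSelectSketch
{
    public static async Task<List<Contact>> SelectAllAsync(
        Func<string, Task<QueryPage<Contact>>> runQuery,     // runs a SOQL string
        Func<string, Task<QueryPage<Contact>>> runNextPage)  // follows nextRecordsUrl
    {
        // List every field explicitly; FIELDS(ALL) cannot be used past 200 rows.
        var fields = string.Join(", ", typeof(Contact).GetProperties().Select(p => p.Name));
        var page = await runQuery($"SELECT {fields} FROM Contact");

        // Append each 2,000-record page to one list until the server
        // reports that no further pages remain.
        var all = new List<Contact>(page.Records);
        while (page.NextRecordsUrl != null)
        {
            page = await runNextPage(page.NextRecordsUrl);
            all.AddRange(page.Records);
        }
        return all;
    }
}
```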
Keep it Simple
Safety
Salesforce allocates a limit on API use per 24-hour time frame. If you exhaust it, it's bad news, as many things that depend on the REST API will stop working. Because of this, you want to check your remaining allocation before making a REST call.
Due to this, we support estimates on large CRUD operations. For example, the following code shows how you can estimate the API usage of a call, check how many API calls are left, add a 20% margin, and then make the call:
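A minimal sketch of the intended pattern (Estimate(), ApiCallsRemaining(), and the `client` instance are assumed names, not the final API):

```csharp
// How many REST calls would this Select cost, and how many are left
// in the 24-hour window?
var callsNeeded = await client.Estimate<Contact>();
var callsLeft   = await client.ApiCallsRemaining();

// Proceed only if the estimate, padded with a 20% margin, fits the budget.
if (callsNeeded * 1.2 < callsLeft)
{
    var contacts = await client.Select<Contact>();
}
```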
Partial Return
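A minimal sketch of the intended usage (the fluent StopAt() shape is an assumption):

```csharp
// Fetch Contacts, but stop issuing REST calls once the remaining
// API allocation drops to 90%.
var contacts = await client.Select<Contact>().StopAt(apiLeftPercent: 90);
```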
By adding StopAt(), you tell the SDK when to stop. In the example above, once the number of API calls left reaches 90%, we will not make any more API calls and will return whatever has been received so far.
Todo: How will the developer know this is a partial return?
Lazy Loading
Let us say we want to retrieve a Contact object and find the name of the user who created it:
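A sketch of that retrieval (GetById and the `client` instance are assumed names):

```csharp
// One REST call retrieves the Contact record itself.
var contactById = await client.GetById<Contact>("003xx0000000001");

// Resolving the CreatedBy relationship lazily triggers a second REST call.
Console.WriteLine(contactById.CreatedBy.Name);
```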
Note that Console.WriteLine(contactById.CreatedBy.Name) makes another REST call: CreatedBy returns a User object, and its Name field is only fetched when accessed. The same lazy behavior applies to the related Account object and its Name field. However, Console.WriteLine(contactById.AccountId) does not require another REST call, as the first call already retrieved that value.
This way, we avoid fetching the entire object tree when the Contact object is retrieved.
However, the following will make only a single call and retrieve only the fields we have specified:
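A sketch using an assumed projection syntax:

```csharp
// One REST call; the generated SOQL selects only Id, LastName, and Email.
var contacts = await client.Select<Contact>(c => new { c.Id, c.LastName, c.Email });
```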
Child Depth Level
You can specify how deep your query should go. The SDK is smart enough not to fall into a recursive loop.
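A sketch of what that option might look like (MaxDepth() is an assumed name):

```csharp
// Resolve relationships at most two levels deep (e.g. Contact -> Account -> Owner),
// so cyclic references can never recurse forever.
var contacts = await client.Select<Contact>().MaxDepth(2);
```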
Insert
The Salesforce object insert API supports multiple ways of inserting an object or a list of objects. How inserts are done, and how errors are managed, is abstracted from the developer. The three methods are:
In Salesforce, you can pass up to 200 objects in a single REST call. By default, if you pass a list of 1000 Contact objects, we split it into 5 REST calls of 200 records each. All REST calls are made concurrently.
Optionally, you can specify a smaller batch size. For example, if you pass a list of 1000 Contact objects and a batch size of 10, we will make 100 REST calls concurrently.
In testing, we have found that a smaller batch size leads to faster performance. The downside is that you will use up more of your API limit.
Here is an example of inserting a Contact object with two fields:
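A minimal sketch (Insert() and the `client` instance are assumed names):

```csharp
// Create a Contact with two fields and insert it in a single REST call.
var contact = new Contact { FirstName = "John", LastName = "Doe" };
await client.Insert(contact);
```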
| Scenario | Time | REST Calls |
| --- | --- | --- |
| One record | 1,801 | 1 |
| 1000 records, batch size 200 | 6,604 | 5 |
| 1000 records, batch size 10 | 155 | 100 |
One of the biggest issues is error management. If you insert 1M records at a batch size of 200, that is 5,000 REST calls, which can be made concurrently. Unfortunately, the odds of the whole insert process going smoothly are slim.
One of the issues with this approach is that we will get "ServerError" responses from Salesforce, and those records will not be inserted, updated, or deleted.
The same logic applies to Delete and Update.
SELECT
Raw SELECT
Submit raw SOQL and get the results bound to a class
Same as above, but get the raw JSON back
Get a single record by its Id
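A sketch of all three (method names are assumptions, not the final API):

```csharp
// 1. Raw SOQL bound to a class
var contacts = await client.Query<Contact>("SELECT Id, Name FROM Contact");

// 2. Raw SOQL returning the raw JSON payload
var json = await client.QueryJson("SELECT Id, Name FROM Contact");

// 3. A single record fetched by its Id
var contact = await client.GetById<Contact>("003xx0000000001");
```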
Upsert
Send a list of objects; if the Id is null or empty, the record is inserted. If the record has an Id, it is updated.
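A sketch of the intended behavior (Upsert() is an assumed name):

```csharp
var contacts = new List<Contact>
{
    new Contact { FirstName = "New", LastName = "Person" },        // no Id  -> insert
    new Contact { Id = "003xx0000000001", LastName = "Updated" }   // has Id -> update
};
await client.Upsert(contacts);
```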
Delete
Bulk API
C# object to CSV conversion is done automatically
If the CSV file is more than 100MB, the file is automatically chunked
Parent / child relationships are automatically mapped from the C# object
GZip support for responses in the background
You can pass a delegate callback to get incremental updates on your Bulk API request
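A sketch of the callback shape (BulkInsert and BulkJobProgress are assumed names):

```csharp
// Progress callback invoked as the bulk job advances.
void OnProgress(BulkJobProgress progress) =>
    Console.WriteLine($"{progress.State}: {progress.RecordsProcessed} records processed");

// CSV conversion, 100MB chunking, and GZip handling all happen internally.
await client.BulkInsert(contacts, OnProgress);

// BulkJobProgress is an assumed shape for the callback payload.
public class BulkJobProgress
{
    public string State { get; set; }   // e.g. "InProgress", "JobComplete"
    public int RecordsProcessed { get; set; }
}
```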
Convenience Methods
Exception Handling
For every insert, we keep track of failures, and failed inserts are retried; if the retry fails, we log all the failed objects.
A local database such as LiteDB will be used. The way this will work is each object will be given a GUID, and if there is a failure, those objects will be saved to LiteDB. Please feel free to start a conversation in the GitHub discussion section.
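A sketch of how that could look with LiteDB (the FailedRecord shape is illustrative of the plan, not final):

```csharp
using System;
using LiteDB;

// Persist a failed object so it can be retried or inspected later.
using var db = new LiteDatabase("apexsharp-failures.db");
var failures = db.GetCollection<FailedRecord>("failed");
failures.Insert(new FailedRecord
{
    Id = Guid.NewGuid(),                       // GUID assigned to every tracked object
    SObjectJson = "{ \"LastName\": \"Doe\" }"  // serialized record that failed to insert
});

public class FailedRecord
{
    public Guid Id { get; set; }
    public string SObjectJson { get; set; }
}
```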
If you have a very large data set, you can use the Bulk API.