The Gemli Project is defunct.

Microsoft caught up with the likes of Fluent NHibernate by adding POCO support to Entity Framework, and between the continued advancement of ASP.NET MVC, the adoption of jQuery, and the engineering of the Razor view engine, the existing .NET 4.0 framework now offers sufficient capability for rapid yet maintainable development. Given the total absence of feedback or support for Gemli (not even a single comment, thumbs-up, or complaint), there's no longer much point in even keeping the project available.

Project Description
A lightweight, configuration-optional, generator-free persistence and O/RM solution for C#, plus a few other minor tools to make developers' lives easier.

There are multiple facets to the Gemli project, but the first and biggest facet is the Gemli O/RM library, Gemli.Data.

Gemli.Data - Documentation @ http://www.gemli-project.org/wiki/Gemli.Data.ashx

Gemli.Data was created partly out of a years-old personal curiosity in the field of O/RMs, and partly out of frustration with the unoptimized programming workflow of raw ADO.NET, the learning curve of NHibernate (including Fluent NHibernate, which shares many of the same objectives), the cruft and development-workflow encumbrances of code-generator-based O/RM tools, and the proprietary feel of Microsoft's LINQ-to-SQL and LINQ-to-Entities (which are also code generators).

The Gemli.Data sub-project is the primary focus at this time. It aims to meet the following objectives:
  • Database O/RM to provide lightweight persistence support for C# projects with no code generators required, no XML-based mappings required, no attributes required, and no code-based setup required in basic scenarios where the mappings can be inferred by looking at the POCO with .NET reflection.
    • Of course, it also aims to support and work well with code generators, support XML-based mappings, work best by default with attributes-based mapping details, and allow for relatively easy manipulation of mappings in explicit code.
    • So, essentially, the mappings are reflection-driven, but can instead be XML-driven or code invocation driven.
    • All of the reflection-driven (attributes-driven) mappings are cached to optimized CLR objects in the same way an XML-driven implementation should be.
    • You do not have to choose one mapping paradigm or another. XML mappings can override attributes/reflection, and manual code can override further.
  • Complete persistence support for O/RM-neutral POCO objects, with as little code and learning curve as feasibly possible.
    • Two or three lines of code to wrap a POCO object, reference a database provider, and persist the POCO object as a DB record.
  • Reference field mappings by CLR property/field or by DB column name, whichever is convenient.
    • Example: myQuery.WhereProperty["ID"].IsEqualTo(2) might be the same as myQuery.WhereColumn["mytable_id"].IsEqualTo(2)
  • Support for SQL stored-procedure-based CRUD operations
  • Support one-to-one relationships, one-to-many relationships, many-to-many relationships, and many-to-one relationships
    • with n-level or infinitely deep loading
      • including with SQL stored procedures for per-table CRUD operations (using client-side joins)
  • Support all ADO.NET-compatible database providers (a DbProviderFactory is required) that speak ANSI SQL and support bi-directional parameters
  • Expose a basic query mechanism for loading filtered collections of data entities.
  • Support pagination
  • Does not replace the usefulness of other O/RM solutions or raw ADO.NET in scenarios that have complex functional or performance requirements
    • If you need significantly more detailed control of your mappings or mapping behavior than the basics, or you need the raw performance of a highly optimized code generator, Gemli.Data probably isn't for you
    • Not a do-it-all solution like NHibernate tries to be, only a do-the-obvious data persistence solution.
    • No support for aggregate functions (e.g. SUM), GROUP BY, or similar advanced queries. There are always workarounds using raw ADO.NET and the load-by-DataRow feature of the DataModel class. ;)
    • No server-side joins or field-picking
    • Only the Table/Class maps are strongly typed when declaring queries, whereas field names in query conditions must be identified with strings
  • Minor data utilities
    • Example: Convert a POCO object to a DataRow with one line of code: var dr = new DataModel<MyPoco>(myObject).Convert.ToDataRow();
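As a sketch of the layered mapping approach described above, attributes can override what reflection would otherwise infer. The attribute names below follow the inferred-mapping comments shown in the annotated example later on this page; the Widget class and its legacy column names are hypothetical.

```csharp
using System.Data;
using Gemli.Data;

// Attribute-driven overrides of the reflection-inferred defaults.
// (Hypothetical class; the table and key column names deviate from inference.)
[DataModelTable(Schema = "dbo", Table = "legacy_widget")]
public class Widget
{
    // Explicit mapping because the legacy column name differs from the property name.
    [DataModelColumn("widget_id", IsPrimaryKey = true, IsIdentity = true,
        IsNullable = false, DataType = DbType.Int32)]
    public int ID { get; set; }

    // No attribute needed; inferred as a nullable nvarchar column named "Name".
    public string Name { get; set; }
}
```

Per the layering described above, XML mappings could then override these attributes, and explicit code could override further still.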

Example of Gemli.Data code:
using Gemli.Data;
using Gemli.Data.Providers;

public class SamplePoco
{
    public int ID { get; set; }
    public string SampleStringValue { get; set; }
    public decimal? SampleDecimalValue { get; set; }
}
public void CreateAndDeleteEntityTest()
{
    ProviderDefaults.AppProvider = new DbDataProvider(
        System.Data.SqlClient.SqlClientFactory.Instance, TestSqlConnectionString); 

    var poco = new SamplePoco { SampleStringValue = "abc" };
    var model = new DataModel<SamplePoco>(poco); 
    model.Save();

    // now let's load it and validate that it was saved
    var mySampleQuery = DataModel<SamplePoco>.NewQuery()
        .WhereProperty["ID"] == poco.ID;
    model = mySampleQuery.SelectFirst();
    poco = model.Entity; // there and back again

    model.MarkDeleted = true; 
    model.Save();
}

Same example, with more comments and some test assertions:
using Gemli.Data;
using Gemli.Data.Providers;

// attributes only used where the schema is not inferred
// inferred: [DataModelTable(Schema = "dbo", Table = "SamplePoco")]
public class SamplePoco
{
    // Inferred: [DataModelColumn("ID", IsPrimaryKey = true, IsIdentity = true, 
    //     IsNullable = false, DataType = DbType.Int32)] // note: DbType.Int32 is SQL type: int
    // Inferred as IsIdentity = true because it's an int or long, isn't nullable, is name either
    // "ID" or "{classname}ID"/"{classname}_ID", and no other properties are IsPrimaryKey = true.
    public int ID { get; set; }

    // inferred: [DataModelColumn("SampleStringValue", IsNullable = true, 
    //     DataType = DbType.String)] // note: DbType.String is SQL type: nvarchar
    public string SampleStringValue { get; set; }

    // inferred: [DataModelColumn("SampleDecimalValue", IsNullable = true, 
    //     DataType = DbType.Decimal)] // note: DbType.Decimal is SQL type: money
    public decimal? SampleDecimalValue { get; set; }
}

[TestMethod]
public void CreateAndDeleteEntityTest()
{
    var sqlFactory = System.Data.SqlClient.SqlClientFactory.Instance;
    var dbProvider = new DbDataProvider(sqlFactory, TestSqlConnectionString);
    ProviderDefaults.AppProvider = dbProvider; 

    // create my poco
    var poco = new SamplePoco { SampleStringValue = "abc" };

    // wrap and auto-inspect my poco
    var dew = new DataModel<SamplePoco>(poco); // data entity wrapper

    // save my poco
    dew.Save(); // auto-synchronizes ID
    // or...
    //dbProvider.SaveModel(dew);
    //dew.SynchronizeFields(SyncTo.ClrMembers); // manually sync ID

    // now let's load it and validate that it was saved
    var mySampleQuery = DataModel<SamplePoco>.NewQuery()
        .WhereProperty["ID"] == poco.ID; // poco.ID was inferred as IsIdentity so we auto-returned it on Save()
    var data = mySampleQuery.SelectFirst(); 
    // or .. DataModel<SamplePoco>.Load(mySampleQuery);
    // or .. dbProvider.LoadModel(mySampleQuery);
    Assert.IsNotNull(data); // success!

    // by the way, you can go back to the POCO type, too
    SamplePoco poco2 = data.Entity; // no typecast nor "as" statement
    Assert.IsNotNull(poco2);
    Assert.IsTrue(poco2.ID > 0);
    Assert.IsTrue(poco2.SampleStringValue == "abc");

    // test passed, let's delete the test record
    data.MarkDeleted = true; 
    data.Save();

    // ... and make sure that it has been deleted
    data = dbProvider.LoadModel(mySampleQuery);
    Assert.IsNull(data);

}

And by the way, if you want your POCO objects to inherit from DataModel instead of being wrapped by DataModel<T>, that works too, but you'll need to update your inner data dictionary in all of your property getters and setters. The downside of working this way is obvious: you have more code to maintain, and accurately maintaining changes to fields gets more difficult. But the biggest advantage of working this way is performance. So if you use Gemli.Data with a code generator, this approach might work better for you.
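A minimal sketch of the inheritance approach might look like the following; the indexer into the inner data dictionary is an assumption based on the description above, not a confirmed API.

```csharp
using Gemli.Data;

// Hypothetical sketch: the entity inherits DataModel directly rather than
// being wrapped by DataModel<T>. Every property getter/setter must keep the
// inner data dictionary in sync, which is the maintenance burden noted above.
public class SampleEntity : DataModel
{
    public int ID
    {
        get { return (int)this["ID"]; }   // assumed inner-dictionary indexer
        set { this["ID"] = value; }
    }

    public string SampleStringValue
    {
        get { return (string)this["SampleStringValue"]; }
        set { this["SampleStringValue"] = value; }
    }
}
```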

Gemli.Data supports strongly typed collections and multiple records, too, of course.

var mySampleQuery = DataModel<SamplePoco>.NewQuery()
    .WhereColumn["SampleStringValue"].IsLike("%bc");
var models = dbProvider.LoadModels(mySampleQuery);
SamplePoco theFirstSamplePocoEntity = models.Unwrap<SamplePoco>()[0];
// or.. SamplePoco theFirstSamplePocoEntity = models[0].Entity;

Gemli.Common

Additional little gems are buried in the solution, such as XML encoding utilities and time-saving generic (<T>) serializer wrappers for converting data to and from XML, binary, and JSON with only a couple lines of code.
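By way of illustration only (the wrapper type and method names here are hypothetical placeholders, not the actual Gemli.Common API), the serializer wrappers are meant to enable round-trips like:

```csharp
// Hypothetical names, shown only to convey the two-line workflow.
var xml = new XmlSerializerWrapper<SamplePoco>().Serialize(poco);
var copy = new XmlSerializerWrapper<SamplePoco>().Deserialize(xml);
```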

Gemli.Web

Gemli.Web is more of a placeholder sub-project; its scope has not been determined, but it will expand into something significant for web developers, including a rich Javascript framework for working with ASP.NET with minimal server-side dependencies (e.g. some rich extensions to jQuery, some client-side MVC tools, etc.). Gemli.Web will at least include (in time) some important client-side bindings for working with Gemli.Data on the server, both for Javascript and for native web runtimes such as Silverlight. None of this is scoped out and committed yet, however, as this initiative has not been started.

Requirements

The assemblies require .NET Framework 3.5. Some effort is made to make Gemli neutral to vendor-specific features, so it should work in Mono, but this has not been tested beyond the most basic smoke tests.

For the source code and its tests you need Visual Studio 2008. If you don't have Team Suite you'll be okay, but you might have to deal with some annoying messages in Visual Studio, or the unit tests might not work. For the tests you also need SQL Server 2008 Express installed with the database server name ".\SQLExpress", which is the default for SQL Server Express.

The Tentative Plan

Here's the want list / feature plan (very much incomplete):

======
 O/RM
======

------
 NEXT
------
[X] - Pagination
[~] - More DB deep saving/deep loading tests, namely n-level deep saves/loads.
[ ] - More tests for serialize/deserialize mappings to/from XML

--------
 FUTURE
--------
[ ] - SQL optimizations for deep joins
[ ] -> with tests
[ ] - Non-SQL persistence, i.e. flat files

--------------
 NICE-TO-HAVE
--------------
[ ] - SQL Server metadata conversion to deserializable XML
[ ] -> with tests
[ ] - IQueryable (limited LINQ support)
[ ] - Complete LINQ support

========
 VS SDK
========

FUTURE
[ ] - Build-time data mappings validation (SQL Server)
[ ] - VS project integration with IntelliSense injection
      for live-generated queries' field references

=====
 Web
=====

--------------
FUTURE
--------------
ASP.NET MVC tools
- Fully reflected URL route inferences: Trivialize URL routes by matching paths directly against
  controller method signatures. Public controllers' public methods should, after all, not be
  public unless they are intended to be web-exposed. Support controller method (action)
  overloads: strings vs. ints vs. floats should be auto-detected using parsability. Consider
  some reserved words like "edit" appended to the action name(??), or identify some other
  convention that facilitates CRUD-equivalent actions without parameterizing action methods.
  Route failures should reveal the best-matching controller's routes.
  End goal: Zero configuration for URL routes, and absolutely no 404s without useful
  developer HTML content.

Gemli.Data CRUD on the browser
- Extensions for ASP.NET Web Forms and ASP.NET MVC for 
  wrapping DataModels to JSON with DataModel metadata
- Javascript layer for lightweight data and query I/O
- Silverlight layer

Last edited Dec 10, 2011 at 7:18 AM by stimpy77, version 157