Conversion error in custom entity (AX 2012)

When Microsoft designed the Data Import Export Framework (DIXF) in AX 2012, it provided a number of entities out of the box. In many scenarios, an entity you need will be missing. You can create your own entities in the development environment from scratch, or you can use the wizard, which creates the basic objects required for a new entity. Sometimes you will run into errors and have to start troubleshooting. This post provides a walkthrough of how to create an entity. When finished, this entity raises an error caused by the conversion of an enumeration field. A solution for this problem is provided at the end of this blog.

Create a new entity

Suppose you need a new entity based on the Inventory posting setup. Take the next steps to create one based on the table InventPosting.

  1. Start Data Import Export Framework > Common > Create a custom entity for data import/export.
  2. Enter InventPosting in the Table name field or select this table using the lookup. Then click Next.

  3. Specify the value InventPosting in the field Display menu item name. This is the menu item used when you want to drill down to the target data from, for example, the staging view. Click Next.

  4. Select the fields that should be supported within the new entity. Continue and finish the wizard.

  5. During the creation of the entity you might be asked to create relations on the new staging table. Always answer this question with Yes. If you choose No, an important relation to the target table might be missing, which would cause the execution to only insert records and never update existing ones.

  6. During the process you might also see a database synchronization start. Don’t abort this process; aborting it could lead to wrong string lengths in the DMF tables that hold the field mappings.
    The wizard creates a new private project with all the minimum required objects for the entity. For reference fields based on record-ID references, fields of a string type are created in the staging table. To be able to map the correct value, generateXXXXX methods are created to handle the conversion.
  7. In this example the generateLedgerDimension method has been implemented fully with the correct coding. This might not be the case in every version of the Data Import Export Framework in AX 2012. Compile the full project to see possible errors or open tasks.

  8. It appears that the method generateCategoryRelation has not been filled with the required coding. It has a //TODO section stating that you have to implement the correct coding yourself (a sketch of a possible implementation follows below this list).

  9. Next to the coding, you also need to implement the DMFTargetTransFieldListAttribute correctly. This tells the entity which field(s) are used as input to find a record ID of the referenced table. The way to specify the fields is different in AX 2012 R3 and AX 2012 R2. Have a look at my blog post Change in Data import export framework where this is explained.
    The complete method might look like the next screenshot when you have completed the task.

  10. In the previous method the fields for input are defined; the return field must also be specified in the getReturnFields method. As the wizard does not create a //TODO section in this method, you might overlook this part, causing the outcome of the method not to be linked automatically to the target field. So add the coding for the return field for the Category relation (see the second sketch below this list).

  11. Compile the project, synchronize the tables, and run an Incremental or full CIL compilation.
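For reference, below is a minimal sketch of what the completed generateCategoryRelation method could look like. This is an assumption-based example, not the exact wizard output: it assumes the wizard created a staging table DMFInventPostingEntity with the string fields EcoResCategory_Name and EcoResCategoryHierarchy_Name (these field names are visible in the SQL statement later in this post) and an entity class with a staging buffer variable named entity, following the pattern of the standard DMF entity classes. The attribute syntax shown is the AX 2012 R3 container form; in R2 the field list is a single comma-separated string, as explained in the blog post mentioned in step 9.

    [DMFTargetTransformationAttribute(true),
     DMFTargetTransformationDescAttribute("Generate category relation"),
     DMFTargetTransformationSequenceAttribute(12),
     DMFTargetTransFieldListAttribute([fieldStr(DMFInventPostingEntity, EcoResCategoryHierarchy_Name),
                                       fieldStr(DMFInventPostingEntity, EcoResCategory_Name)])]
    public container generateCategoryRelation(boolean _stagingToTarget = true)
    {
        container               res;
        EcoResCategory          category;
        EcoResCategoryHierarchy hierarchy;

        if (_stagingToTarget)
        {
            // Find the record ID of the category with the imported name
            // inside the imported category hierarchy.
            select firstOnly RecId from category
                where category.Name == entity.EcoResCategory_Name
            exists join hierarchy
                where hierarchy.RecId == category.CategoryHierarchy
                   && hierarchy.Name  == entity.EcoResCategoryHierarchy_Name;

            if (!category.RecId)
            {
                error(strFmt("Category %1 was not found in hierarchy %2",
                             entity.EcoResCategory_Name,
                             entity.EcoResCategoryHierarchy_Name));
            }

            res = [category.RecId];
        }

        return res;
    }

The matching change in getReturnFields could then look like the sketch below. It follows the pattern used in the standard DMF entity classes (e.g. DMFCustomerEntityClass); the names DMFInventPostingTargetEntity (the target query) and DMFInventPostingEntityClass are assumptions based on the wizard’s naming, so verify them in your own project.

    public static container getReturnFields(Name _entity, MethodName _name)
    {
        DataSourceName dataSourceName = queryDataSourceStr(DMFInventPostingTargetEntity, InventPosting);
        container      con            = [dataSourceName];

        // Translates a field on the target table to the XML field name
        // that is used in the entity field mapping.
        Name fieldStrToTargetXML(FieldName _fieldName)
        {
            return DMFTargetXML::findEntityTargetField(_entity, dataSourceName, _fieldName).xmlField;
        }

        switch (_name)
        {
            case methodStr(DMFInventPostingEntityClass, generateLedgerDimension):
                con += [fieldStrToTargetXML(fieldStr(InventPosting, LedgerDimension))];
                break;

            // The part the wizard leaves out: link the outcome of
            // generateCategoryRelation to the CategoryRelation target field.
            case methodStr(DMFInventPostingEntityClass, generateCategoryRelation):
                con += [fieldStrToTargetXML(fieldStr(InventPosting, CategoryRelation))];
                break;

            default:
                con = conNull();
        }

        return con;
    }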

The entity is now ready to be set up in the Target entities form and used in a Processing group.

Conversion error

As told in the introduction, this entity will raise an error at the time of copying data to the target. What is the exact error? What causes it? How can it be solved? This is explained below.

For the test, I created a very small CSV file with some records that can be used in the demonstration company USMF.

The source to staging was executed without problems. Note that the correct string values for the Account type are inserted in the staging table.

When you execute the Copy data to Target step, the job fails. The next error will be visible.

The error states that a nvarchar (string) field cannot be converted to an int (integer). This is for sure related to the enumeration fields in this table. But I learned that the enumeration conversion works with the label, the enumeration text, and the value number, so why is this failing?

SQL statement:

SELECT T1.ITEMRELATION,T1.CUSTVENDRELATION,T1.TAXGROUPID,T1.INVENTACCOUNTTYPE,T1.ITEMCODE,T1.CUSTVENDCODE,T1.COSTCODE,T1.COSTRELATION,T1.CATEGORYRELATION,T1.LEDGERDIMENSION,T1.INVENTPROFILETYPEALL_RU,T1.INVENTPROFILETYPE_RU,T1.INVENTPROFILEID_RU,T1.SITECODE_CN,T1.SITERELATION_CN,T1.RECVERSION,T1.PARTITION,T1.RECID,
T2.COSTRELATION,T2.CUSTVENDRELATION,T2.INVENTPROFILEID_RU,T2.ITEMRELATION,T2.SITERELATION_CN,T2.TAXGROUPID,T2.DEFINITIONGROUP,T2.ISSELECTED,T2.TRANSFERSTATUS,T2.EXECUTIONID,T2.ECORESCATEGORY_NAME,T2.ECORESCATEGORYHIERARCHY_NAME,T2.COSTCODE,T2.CUSTVENDCODE,T2.INVENTACCOUNTTYPE,T2.INVENTPROFILETYPE_RU,T2.INVENTPROFILETYPEALL_RU,T2.ITEMCODE,T2.LEDGERDIMENSION,T2.SITECODE_CN,T2.COSTGROUPID,T2.RECVERSION,T2.PARTITION,T2.RECID
FROM INVENTPOSTING T1 CROSS JOIN DMFINVENTPOSTINGENTITY T2
WHERE ((T1.PARTITION=?) AND (T1.DATAAREAID=?)) AND ((T2.PARTITION=?) AND ((((((((((((((T2.RECID=?)
  AND (T1.SITERELATION_CN=T2.SITERELATION_CN)) AND (T1.SITECODE_CN=T2.SITECODE_CN))
  AND (T1.INVENTPROFILEID_RU=T2.INVENTPROFILEID_RU)) AND (T1.INVENTPROFILETYPE_RU=T2.INVENTPROFILETYPE_RU))
  AND (T1.INVENTPROFILETYPEALL_RU=T2.INVENTPROFILETYPEALL_RU)) AND (T1.COSTRELATION=T2.COSTRELATION))
  AND (T1.COSTCODE=T2.COSTCODE)) AND (T1.TAXGROUPID=T2.TAXGROUPID))
  AND (T1.CUSTVENDRELATION=T2.CUSTVENDRELATION)) AND (T1.CUSTVENDCODE=T2.CUSTVENDCODE))
  AND (T1.ITEMRELATION=T2.ITEMRELATION)) AND (T1.ITEMCODE=T2.ITEMCODE))
  AND (T1.INVENTACCOUNTTYPE=T2.INVENTACCOUNTTYPE)))
ORDER BY T1.INVENTACCOUNTTYPE,T1.CUSTVENDCODE,T1.CUSTVENDRELATION,T1.ITEMCODE,T1.ITEMRELATION,T1.TAXGROUPID,T1.INVENTPROFILETYPEALL_RU,T1.INVENTPROFILETYPE_RU,T1.INVENTPROFILEID_RU

After debugging and looking at the SQL statement, it turns out the error is not caused by the conversion from the staging to the target value for enumeration fields, but by the attempt to find an existing record. AX tries to join the target and staging table records in a query to find a possible record to update instead of creating a new one. This join is built based on the InventPosting relation on the staging table. Below, the incorrect fields are marked. Why?

This relation is automatically created by the wizard, based on the primary index of the target table. In case of a record-ID index, it uses the replacement key index instead, provided this replacement key is unique.

But now the 64,000 dollar question: how to solve it?

It is good to know that there are two attempts at finding an existing record. The first attempt is a query built in code, where the staging and target tables are linked using the staging table’s relation to the target table. Just removing this relation will not solve the problem: DIXF will then assume there is no relation, so records will only ever be inserted, and the target table will raise duplicate key errors.

Removing the incorrect fields from the relation is also not a good idea. The query would then find the wrong existing records and update them instead of creating new records. This happens when, for example, the values for the Item and Customer relation are the same and the only difference is in the Account type selection.

So we have to know how and when the second attempt is executed and how it works. If the first attempt does not find an existing record, DIXF tries to find a record in the target table based on the replacement key of the target table. If there is no replacement key, it tries to find an existing record based on the primary index, provided this index does not contain the record ID field.

So we have to cheat AX into finding no record in the first attempt. For that, we delete all field relations and create an impossible relation: for example, a record ID field from the staging table linked to the relation type field on the target table. Record IDs usually start with high numbers, so the query will never find an existing record with low relation type values. This way, the first query method finds no existing record and the second attempt works correctly for this table.
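As an illustration, the impossible relation on the staging table could look like this in the AOT. The use of InventAccountType as the target field is just an assumed example of an enumeration field that only ever holds low values; any field of that kind on InventPosting would do.

    Relations
       InventPosting
          DMFInventPostingEntity.RecId == InventPosting.InventAccountType

Because record IDs in AX 2012 are large numbers and enumeration values are small, this join can never return a match, so the code-based first attempt always falls through to the second attempt.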

If you make the changes, save, and compile the table, you can rerun the target execution step, which will now give the correct outcome without errors.

Before you implement a similar cheat on your entities, test it carefully in a separate environment to confirm it works correctly.



I do hope you liked this post and that it will add value for you in your daily work as a professional. If you have related questions or feedback, don’t hesitate to use the Comment feature below.


That’s all for now. Till next time!


6 replies
  1. Kristina says:

    Hi Andre,

    Was wondering if the photos on this article can be fixed? Looks like the images are corrupted. Thank you!

  2. Lora says:

    Hi Andre
    Thank you for the article, it helped me a lot to start with my project.
    I need to import ledger journal lines from a file. This import should create new journals (there is a field indicating which lines in the file should be grouped in the same journal). So I was wondering how to do this correctly using DIXF? I can generate JournalNum from a number sequence and create new journals in my custom generateJournal() method, linking to JournalNum in getReturnFields, but now I wonder:
    a) how can I make sure that all the new journals and lines will be rolled back if any of the lines fail import to target? (all-or-nothing scenario)
    b) how can I link my staging with the target LedgerJournalTrans so that users can re-process failed lines (in case users would want to import what they can, then fix errors in the failed lines and re-import them again; in which case they should go into the relevant journal if it was already created)?
    I don’t know which scenario is doable with DIXF. The (b) requires some complex link between staging and target and I don’t know how to do it (I can’t link them by the standard JournalNum+Voucher+LineNum since I generate JournalNum, i.e. it does not exist in the imported file). The (a) requires transactional integrity of the whole import, which I don’t know how to implement.
    Thank you very much for your help

    • André Arnaud de Calavon says:

      Hi Lora,

      I could not approve and review blog replies while I was in a foreign country last week. Here are briefly some hints and thoughts:
      a) DIXF is not designed to roll back all records. It is designed to process what is possible and have lines with errors available for review. You can consider a customization.
      b) The link is based on a table relation between the staging and the target tables. As you create journals, you mentioned you don’t have the created journal number in your file. That is indeed a challenge. You can try to link on the voucher and line number in case these are unique.

      You can also consider importing all lines into a new custom table and, from this table, use custom code (e.g. a batch job) to copy all the lines to new journal tables and their lines. Then you have your own control over when to commit or roll back data. This will require another way of error handling.

  3. Lora says:

    I really appreciate your help and the detailed answer.
    The fake target table seems to be the only feasible solution.

  4. Lora says:

    Sorry, I was still editing the message when I accidentally posted it. So the first line “Hi André. No problem at all.” went missing.

