Thursday, May 14, 2009

TechEd 2009 - Day 4

Thursday ended up being a great day for sessions. The two top sessions for me were "Enhancing the SAP User Experience: Building Rich Composite Applications in Microsoft Office SharePoint Server 2007 Using the BizTalk Adapter Pack" and "SOA319 Interconnect and Orchestrate Services and Applications with Microsoft .NET Services".

Enhancing the SAP User Experience: Building Rich Composite Applications in Microsoft Office SharePoint Server 2007 Using the BizTalk Adapter Pack
In this session Chris Kabat and Naresh Koka demonstrated the various ways of exchanging data between SAP and other Microsoft technologies.

Why would you want to extract data from SAP - can't you do everything in SAP?
SAP systems tend to be mission critical, sources of truth, or systems of record. Bottom line: they tend to be very important. However, it is not practical to expect that all information in the enterprise is contained in SAP. You may have acquired a company that used different software, you may have an industry-specific application for which no SAP module exists, or you may have decided that building an application on a different platform was more cost effective. Microsoft technology is widely deployed across many enterprises, making it an ideal candidate to interoperate with SAP. Microsoft's technologies tend to be easy to use, quick to build and deploy, and generally have a lower TCO (Total Cost of Ownership). Both Microsoft and SAP have recognized this and have formed a partnership to ensure interoperability.

How can Microsoft connect with SAP?
The four approaches they discussed were:
  • RFC/BAPI calls from .Net
  • RFC/BAPI calls hosted in IIS
  • RFC/BAPI calls from BizTalk Server
  • .Net Data Providers for SAP

Most of their discussion involved using the BizTalk Adapter Pack 2.0 to communicate with SAP. In case you were not aware, this Adapter Pack can be used both inside and outside of BizTalk, and they demonstrated both scenarios.
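To make the "outside of BizTalk" scenario concrete, here is a rough sketch of calling SAP from a plain .NET application with the Adapter Pack's WCF-based SAP binding. The connection URI, the RfcClient proxy, and the commented-out BAPI call are placeholders of mine, not from the session; the real proxy class is generated by the "Add Adapter Service Reference" wizard for whichever RFCs/BAPIs you select.

```csharp
// Minimal sketch of calling an SAP RFC/BAPI from plain .NET using the
// BizTalk Adapter Pack's WCF-based SAP binding -- no BizTalk required.
// Connection details, the RfcClient proxy and the BAPI name are illustrative.
using System;
using System.ServiceModel;
using Microsoft.Adapters.SAP;          // from the BizTalk Adapter Pack

class Program
{
    static void Main()
    {
        var binding = new SAPBinding { EnableSafeTyping = true };

        // Example connection URI for an SAP application server (placeholder values).
        var address = new EndpointAddress(
            "sap://CLIENT=800;LANG=EN;@a/sapserver01/00?RfcSdkTrace=False");

        // RfcClient is the proxy generated by the wizard for the selected RFCs.
        var client = new RfcClient(binding, address);
        client.ClientCredentials.UserName.UserName = "sap-user";
        client.ClientCredentials.UserName.Password = "sap-password";

        try
        {
            client.Open();
            // Call whatever RFC/BAPI operation was generated, e.g.:
            // var response = client.BAPI_SALESORDER_GETSTATUS("0000012345");
        }
        finally
        {
            client.Close();
        }
    }
}
```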

Best Practice
A best practice that they described was using a canonical contract (or schema) when exposing SAP data through a service. I completely agree with this technique, as you are abstracting some of the complexity away from downstream clients. You are also limiting the coupling between SAP and a consumer of your service. SAP segment/node/field names are not very user friendly. If you want a SharePoint app or .NET app to consume your service, you shouldn't delegate the pain of figuring out what AUFNR (for example) means to them. Instead you should expose a business-friendly term like OrderNumber.
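As a simple illustration of the canonical contract idea (my own example, not from the session), the service fronting SAP can expose friendly names and keep the SAP field mapping to itself:

```csharp
// A canonical order contract. Downstream SharePoint/.NET consumers see
// business-friendly names; only the service that fronts SAP needs to know
// that AUFNR really means "order number".
using System.Runtime.Serialization;

[DataContract(Namespace = "http://contoso.com/contracts/order/v1")]
public class CanonicalOrder
{
    [DataMember] public string OrderNumber { get; set; }    // SAP: AUFNR
    [DataMember] public string MaterialNumber { get; set; } // SAP: MATNR
    [DataMember] public decimal Quantity { get; set; }      // SAP: MENGE
}

public static class SapOrderTranslator
{
    // Map the raw SAP field values onto the canonical contract.
    public static CanonicalOrder FromSapFields(string aufnr, string matnr, decimal menge)
    {
        return new CanonicalOrder
        {
            OrderNumber = aufnr,
            MaterialNumber = matnr,
            Quantity = menge
        };
    }
}
```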

.Net or BizTalk, how do I choose?
Since the BizTalk Adapter Pack can be used inside or outside of BizTalk, how do you decide where to use it? This decision can be very subjective. If you already own BizTalk and have the team to develop and support the interface, then it makes sense to leverage the skills and infrastructure you have in house and build the application with BizTalk. Using this approach you can also build out a service catalogue that allows other applications to leverage these services as well. The scale-out story in BizTalk is very good, so you do not have to be too concerned about a service that is sparingly used at first and then mutates into a mission-critical service consumed by many other applications; otherwise, the next thing you know the service can't scale and your client apps have broken because they cannot connect. Another benefit of using BizTalk is the canonical contract I previously described: mapping your canonical schema values to your SAP values is very easy. All you have to do is drag a line from your source schema to your destination schema.

If you do not have BizTalk, or the resources to support that scenario, then leveraging the Adapter Pack outside of BizTalk is definitely an acceptable practice. In many ways this type of decision comes down to your organization's appetite for build vs. buy.

From a development perspective the metadata generation is very similar. Navigating the SAP catalogue is the same whether you are connecting with BizTalk or .NET; the end result is that you get schemas generated for a BizTalk solution versus code for a .NET solution.


SOA319 Interconnect and Orchestrate Services and Applications with Microsoft .NET Services
Clemens Vasters' session on .NET Services was well done. I saw him speak at PDC and he didn't disappoint this time either. He gave an introductory demo and explanation of the relay service and the direct connect service. Even though these demos were console applications, if you sit back and think about what he was demonstrating, it blows your mind. Another demo involved a blog site that he was hosting on his laptop. The blog was publicly accessible because he had registered his application in the cloud. This allowed the audience to hit his cloud address over the internet, but it was really his laptop that serviced the web page requests. As he put it, he "didn't talk to anyone in order to make any network configuration arrangements". This was all made possible by an application on his laptop that established a connection with the cloud and listened for any incoming requests.
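For reference, a relay listener along the lines of those console demos looks roughly like the sketch below. The solution name, endpoint path, and contract are placeholders, and the credential setup is only indicated in a comment since it varied between .NET Services CTPs.

```csharp
// A minimal relay listener: a WCF service self-hosted on a laptop, registered
// in the cloud so callers hit the public service bus address while the laptop
// actually answers the request.
using System;
using System.ServiceModel;
using System.ServiceModel.Description;
using Microsoft.ServiceBus;            // .NET Services SDK

[ServiceContract]
public interface IEchoContract
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoContract
{
    public string Echo(string text) { return text; }
}

class Host
{
    static void Main()
    {
        // Public cloud address, e.g. sb://mysolution.servicebus.windows.net/echo
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mysolution", "echo");

        var host = new ServiceHost(typeof(EchoService));
        ServiceEndpoint endpoint =
            host.AddServiceEndpoint(typeof(IEchoContract), new NetTcpRelayBinding(), address);

        // Attach a TransportClientEndpointBehavior with your solution credentials
        // to the endpoint here so the listener can authenticate with the relay.

        host.Open();   // outbound connection to the cloud; now publicly reachable
        Console.WriteLine("Listening on " + address);
        Console.ReadLine();
        host.Close();
    }
}
```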

As mentioned in my Day 1 post, the .NET Services team has been working hard on some enhancements to the bus. The changes mainly address issues that "sparsely connected receivers" may have: if you have a receiver that has problems maintaining a reliable connection to the cloud, you may need to add some durability.

So how do you add durability to the cloud? Two ways (currently):

  • Routers
  • Queues

Routers have a built-in buffer and will continue to retry delivery to the downstream endpoint, whereas a Queue will persist the message until the endpoint pulls it. So Routers push, and Queues need to be pulled from.

Another interesting feature of the Router is how it deals with uneven bandwidth. Let's say you are exchanging a lot of information between a client with a lot of bandwidth (say a T1) and one with very little (say 128 kbps). The system with a lot of bandwidth will overwhelm the system with little bandwidth; another way to look at this is someone "drinking from a fire hose". By using buffers, the Router is able to deal with this unfair distribution of bandwidth by only providing as much data to the downstream application as it can handle. Once the message generation slows down, the downstream endpoint should be able to catch up with the upstream system.

Routers also have a multicast feature. You can think of this much like a UDP multicast scenario: best efforts are made to distribute the message to all subscribers, but there is no durability built in. However, as I just mentioned, a Router configured with one subscriber can take advantage of a buffer, and there is nothing stopping you from multicasting to a set of Routers, so you can still achieve durability that way.

A feature of Queues that I found interesting was the two modes they operate in. The first is a destructive receive, where the client pulls the message and it is deleted...no turning back. In the second mode the receiver connects and locks the message so that it cannot be pulled by another receiver; once the message has been retrieved and processed, the client issues the delete command when it is ready.
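To make the difference concrete, here is a purely conceptual sketch of the two receive modes. The client type and method names are hypothetical, not the actual .NET Services CTP API; only the pattern matters.

```csharp
// Hypothetical client to illustrate the two queue receive modes described in
// the session; these types and methods are NOT the real .NET Services CTP API.
public interface ICloudQueueClient
{
    // Mode 1: destructive receive -- the message is deleted as it is handed over.
    Message Receive();

    // Mode 2: peek-lock -- the message is locked so no other receiver can take it,
    // and it is only deleted when the client explicitly says it is done.
    Message PeekLock();
    void DeleteLockedMessage(Message lockedMessage);
}

public class Message
{
    public string Body { get; set; }
}

public static class ReceiveExamples
{
    public static void DestructiveReceive(ICloudQueueClient queue)
    {
        Message msg = queue.Receive();   // gone from the queue; no turning back
        Process(msg);                    // if this throws, the message is lost
    }

    public static void PeekLockReceive(ICloudQueueClient queue)
    {
        Message msg = queue.PeekLock();  // locked, but still on the queue
        Process(msg);
        queue.DeleteLockedMessage(msg);  // only now is it removed
    }

    static void Process(Message msg) { /* handle the message body */ }
}
```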

Pricing
I am sure it comes up in every Azure presentation, and this one was no different: pricing. We didn't get any hard facts, but were told that Microsoft's pricing should be competitive with comparable offerings. Both bandwidth and cost per message will be part of the equation, so when you are streaming large messages you are best off looking at direct connections. Direct connections are established initially as a relay, but while the relay is taking place the .NET Service Bus performs some NAT probing. Approximately 80% of the time it is able to determine the settings that allow a direct connection to be established. This improves the speed of the data exchange, as Microsoft is no longer an intermediary, and it reduces the cost of the exchange since you are connecting directly with the other party.
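If memory serves, opting in to this relay-then-direct behaviour is a binding setting on the client and listener; the sketch below shows the idea, with the caveat that the exact property and enum names are as I recall the SDK.

```csharp
using Microsoft.ServiceBus;

class DirectConnectConfig
{
    // Build a binding that starts relayed and upgrades to a direct socket
    // between the two parties when NAT probing succeeds (the ~80% case
    // mentioned in the session).
    public static NetTcpRelayBinding CreateHybridBinding()
    {
        return new NetTcpRelayBinding
        {
            ConnectionMode = TcpRelayConnectionMode.Hybrid
        };
    }
}
```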

Workflow
At this point, it looks like Workflow in the cloud will remain in CTP mode. The feedback the product team received strongly encouraged them to deliver .NET 4.0 Workflow in the cloud instead of releasing cloud WF based upon the 3.5 version. The .NET Services team is trying to do the right thing once, so they are going to wait for .NET 4.0 Workflow to finish cooking before they make it a core offering in the .NET Services stack.
