Audi RS5 Sportback



It could well happen: the Audi A5 Sportback in an RS version.

It could be the eight-cylinder with 450 hp, but another possibility is the 3.0 TFSI with 408 hp. It remains guesswork with this rendering, but it is intriguing nonetheless.

A nice, muscular four-seat coupé on 20-inch wheels.


Audi A1 e-tron (detailed)




Here the Audi A1 e-tron is shown in cutaway form.
So if you want plenty of information about this e-tron version, take a look at the images to see how Audi has put it together.


The technology is nicely described alongside the images.


Audi A4 Avant causes havoc on Veemarktstraat in Groningen



In the early hours of Sunday morning, an unknown motorist caused havoc on the Veemarktstraat in Groningen. The driver lost control of the wheel in a bend and crashed into a parked car. The impact pushed the parked car into the facade of a business premises.

The motorist then came to a standstill against a pillar. Witnesses confronted the man in the car, but he did not want to wait for the police to arrive and ran off in the direction of the Winschoterkade. Officers searched for the driver, but he was nowhere to be found. The police are investigating the incident and are trying to identify the driver through witness statements.

Source (more photos)


Audi TTS Yellow Black




[photo: tt-s-yellow-black-1]

"Geef me geel en zwart" – phew, in Dutch that doesn't sound tough.
"Give me yellow and black" – now that has a nice ring to it. Here is a nice, fast TTS Coupé.
With a black roof, black wheels, black mirror caps and a rear spoiler from the genuine Audi accessories range.

[photo: tt-s-yellow-black-2]

Here is the black-painted 19-inch wheel.

[photo: tt-s-yellow-black-3]

Here you can clearly see the roof, tailgate and spoiler in black; it makes for a nice contrast.

[photo: tt-s-yellow-black-4]


Road test: Audi A8 Hybrid



The flagship limousine, the A8, is now also on the market as a hybrid.
And that is good news for environmental enthusiasts and businesses: the 2.0 TFSI engine, together with the electric motor, generates a maximum output of 245 hp and a torque of 480 Nm.
Average fuel consumption is just 6.3 l/100 km, with average CO2 emissions of 147 g/km.

With the Audi A8 hybrid you can drive up to three kilometres at 60 km/h
on electric power alone.
The top speed in electric mode is 100 km/h.

Here is Autoweek's road test of the A8 Hybrid.

Via: Autoweek


Video: opening of the new Audi engine test centre in Neckarsulm




Audi is opening its new engine-development test centre in Neckarsulm.
A magnificent facility where the engineers can get fully to work on developing and improving Audi's engines for the future.


Audi is working on new safety & assistance systems



By now you know them well, all those discussions among drivers: ever more assistance systems? Where is it all heading? Will the driver eventually become redundant? Will driving still be fun? At the international "Audi active safety" technical workshop, one thing was established from the outset: the car will continue to be driven by a human being, and his experience remains welcome. Driver assistance systems are an offer, a way to stay on the safe side when handling a car.

Human error, time and again
Today's new cars offer assistance systems in ever greater numbers. It is difficult, however, to rank these systems by usefulness. Audi follows one basic rule here: no patronising of the driver! The systems will only intervene when the driver reacts too slowly, or not at all, in critical situations. Ninety percent of traffic accidents can be traced back to human error. That is another reason why Audi is working flat out on developing these systems, which improve active safety.

This workshop was less about presenting the assistance systems already offered in the current models on the market, such as the system that maintains the desired distance to the car in front and reduces the chance of a collision by braking automatically. The systems that keep the car in its own lane, or assist when changing lanes, have likewise become commonplace, and the parking assistant, which parks the car smoothly even in small spaces, is no longer new either.

An astonishing trick with the trailer
The "Audi active safety" workshop exists above all to introduce the world to the ideas of the engineers in Ingolstadt for new assistance systems, including how driving safety and comfort can be improved. Impressive, for example, is the way a new assistant autonomously reverses a car with a single-axle trailer. The driver only has to set the direction of travel or the outer curve in advance; after that he must let go of the steering wheel. Everything proceeds without any further input from the driver, entirely without problems.

Automatic parking taken even further
Hardly less astonishing is the way the Audi parks itself in a very narrow garage box that barely leaves room to get in or out. Here too the driver gets out beforehand and starts the autonomous parking manoeuvre by pressing a special button on the car's remote control. Like a ghost car the Audi drives precisely into the middle of the garage box and comes to a standstill; the engine and lights switch off automatically, open windows are closed and the doors lock – done!


Less nerve-racking stop-and-go
Stop-and-go! A nerve-racking game of patience for the driver. The traffic-jam assistant provides calm. The Audi obediently follows the car in front of it, without the driver having to accelerate or brake when traffic slows down or a stop is needed. Even pulling away again is handled by the traffic-jam assistant. Guided by the lane markings, the car also stays precisely in the track of the car ahead, even through long bends.



Junctions: always be careful
Audi is also working on a junction assistant, which is meant to prevent the typical side-impact collisions at junctions and exits. Wheel sensors, together with a wide-angle lens on a video camera, detect a vehicle approaching from the side and warn the driver in several stages. The idea is that one day, thanks to wireless car-to-car communication, such a collision can be avoided altogether.

At the last second...
How valuable an automatic emergency stop is in the face of acute collision danger is shown by a test with a pedestrian dummy. The simulated scenario is a child suddenly stepping onto the road from between two cars – typical child behaviour. No driver, however experienced, can react as lightning-fast as the safety system that performs the life-saving emergency stop.

Mechanical and hydraulic components are increasingly being displaced by electric ones. That saves, above all, space and weight. With "by-wire" technology, even steering and braking are handled electronically when needed – however hard it may be for today's drivers to imagine that in the future there will still be a steering wheel but no steering column, and that the brakes will be actuated electrically. Audi is working intensively on by-wire technology: in the R8 e-tron, the steering and the brakes are already operated "by wire".

Light and vision: progress to the top!
Another subject Audi is working on is lighting technology. For years the headlights and tail lights have served as defining features of an Audi, with the LED daytime-running-light strips and the LED tail lights playing a leading role. LED headlights are being offered on more and more model ranges.

Several Audi show cars have already drawn attention to the car lighting of the future. Matrix-beam headlights get their information from a camera, the navigation system and the sensors. The intelligent tail lights, whose intensity is regulated by a light sensor, remain clearly recognisable whatever the weather.

The laser rear fog light has a comparable function: it warns the car behind when it is following too closely. To that end a red line can be projected onto the road surface, marking where the 'safe distance' boundary lies. Seeing and being seen remain the most important rules for road safety.

Source: Audi


Monitor Dell PERC when running on Oracle Virtual Server 3.x

In this article I will try to explain how to monitor your Dell RAID controller with Cloud Control when running Oracle Virtual Server 3.x (OVS 3.x).

Some time ago a physical drive failed in one of our Dell PowerEdge servers. This did not cause any immediate problem, because the drive was part of a RAID set managed by the Dell built-in PowerEdge RAID Controller (hereafter PERC). However, we did not notice the failure until a second physical disk broke down, leaving the whole RAID set unavailable.

The technical part of replacing the disks and recovering has been done, but we were left with the issue of not having been notified of the first failure at all. So I started a quest on the web to find out how to prevent this from happening again.

The first solution I found was (of course) to install Dell OpenManage. It looks promising, but unfortunately it is not certified (and does not work) on a server running Oracle Virtual Server 3.x, which we use on almost all our hardware. I tried anyway, but after installing the software the server refused to start after a reboot.

Next try: SNMP… Unfortunately, all the SNMP and PERC related information that can be found is based on the Dell OpenManage software, which we could not use, as described above…

Crawling the web, I finally bumped into some blog articles stating that the PERC is a rebranded MegaRAID adapter. The same articles mentioned something like MegaCli (the MegaRAID command line interface). Aha, a new hook into a possible solution?

Yep! With this information as a new starting point I was able to retrieve enough information to query the PERC from the command line. And if it's command line, we can script; and if we can script, we can monitor with Oracle Cloud Control (or any other monitoring tool).

OK, enough blabla, let’s walk through the steps required to get this thing moving.

First we need to download two small RPMs to be installed on the server:
– Lib_Utils-1.00-09.noarch.rpm, which can be found here
– MegaCli-8.02.16-1.i386.rpm, which can be found on the support site of LSI. Download the zipfile which contains the RPM (and other tools for different OSes).

    1. Log in on your host as root and navigate to /tmp
    2. Install both RPMs:
[root@host tmp]# yum localinstall Lib_Utils-1.00-09.noarch.rpm --nogpgcheck
[root@host tmp]# yum localinstall MegaCli-8.02.16-1.i386.rpm --nogpgcheck
    3. Create a softlink in /usr/sbin to the MegaCli executable, using one of these statements (MegaCli for 32-bit, MegaCli64 for 64-bit systems):
[root@host tmp]# ln /opt/MegaRAID/MegaCli/MegaCli /usr/sbin/MegaCli
[root@host tmp]# ln /opt/MegaRAID/MegaCli/MegaCli64 /usr/sbin/MegaCli
    4. Test the functionality by executing the following command, which should show the number of RAID controllers in the system:
[root@host tmp]# /usr/sbin/MegaCli -adpCount
    5. Execute the following command, which gives the number of physical disks. The output will be used at a later stage when configuring Cloud Control:
[root@host tmp]# /usr/sbin/MegaCli -PdGetNum -a0 -NoLog |grep Number
    6. If the command above executes without error, you can execute /usr/sbin/MegaCli -h to get a long list of help information describing the huge load of options.
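
As an aside: once MegaCli works, the same one-liner can feed any scripted check, independent of Cloud Control. Below is a minimal standalone sketch; the expected disk count, the parsing with awk and the mail recipient are assumptions you would adapt to your own host.

#!/bin/bash
# Minimal standalone PERC check (sketch): compare the current physical
# disk count against the expected value and mail a warning on a mismatch.
# EXPECTED_DISKS and the mail address are site-specific placeholders.
EXPECTED_DISKS=6

# MegaCli prints e.g. "Number of Physical Drives on Adapter 0: 6";
# take the number after the colon.
CURRENT=$(/usr/sbin/MegaCli -PdGetNum -a0 -NoLog | grep Number | awk -F: '{gsub(/ /,"",$2); print $2}')

if [ "$CURRENT" -lt "$EXPECTED_DISKS" ]; then
  echo "PERC on $(hostname): only $CURRENT of $EXPECTED_DISKS physical disks visible" \
    | mail -s "RAID disk failure on $(hostname)" admin@example.com
fi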

When the part above is executed successfully we can proceed with the next part: Getting Oracle Cloud Control to monitor the PERC.

At this point I assume you already have a Cloud Control agent running correctly on the specific host. If not, you have to install and configure one before you continue.

In Cloud Control 12c we have a beautiful feature called Metric Extensions.

Quote from the documentation:

Metric Extensions enhance Enterprise Manager’s monitoring capability by allowing you to create new metrics to monitor conditions specific to your environment. These Metric Extensions can be added to any target monitored by Enterprise Manager. Once developed and deployed to your targets, the metrics integrate seamlessly with the Oracle-provided metrics.

This means that almost anything you can execute from a command line interface (CLI) that produces a formatted result can be used as a metric. This can be at the OS prompt, SQL, RMAN, ODI, Dell OpenManage, Microsoft SQL Server, etc.

The development cycle for a metric extension looks like this:

Metric Extension Lifecycle

Since all commands to the PERC have to be executed as root, I decided (for simplicity) to use the Monitoring Credential facility in Cloud Control with the root account. You could also set up sudo and a specific user account, but that is beyond the scope of this blog post.

In the next steps we will create a metric which will alert us when the number of available disks changes (i.e. a disk fails or is removed). Based on this example you should be able to create your own variants depending on the requirements.

  1. Log in to Cloud Control
  2. Navigate to <Setup><Security><Monitoring Credentials>
  3. Select the <Host> target type and click on <Manage Monitoring Credentials>.

Next you will see a list of all hosts in Cloud Control with 3 Credential Sets. We will use the set called “Host Credentials For Real-time Configuration Change Monitoring”.

  4. Select the required line (hostname-credentialset) and click <Set Credentials>.
  5. Fill in the username (root) and the corresponding password.
  6. Click on <Test and Save> to store the password.

When the security has been setup, we can start creating the Metric extension.

  1. Navigate to <Enterprise><Monitoring><Metric Extensions>.
  2. Click on <Actions><Create> to start the wizard which will assist you in creating the Metric extension.

On the first screen we set the general settings for this metric.

  1. Select the Target Type <Host>
  2. Give the Metric Extension a name; I used ME$Raid_PD_Count
  3. Give the metric a useful Display Name, e.g. Raid Physical Disk Count
  4. Set the adapter type to “OS Command – Multiple Columns”
  5. Add a description if desired and leave the Collection schedule on default settings
  6. Click <Next> to proceed to the Adapter screen

The adapter screen defines how a specific query is executed. A proper description of the options is shown on the right side of the screen.

    1. Since we have to execute a (very small) script, the command we will use is ‘/bin/bash’
    2. Click on the small pencil behind the script box:
      • As Filename we use “RaidPhDiskCount”
      • In the File Contents box paste the following line:
/usr/sbin/MegaCli -PdGetNum -a0 -NoLog |grep Number
      • Click <OK>

Notice that the “Script” textbox has been filled with “%scriptsDir%/RaidPhDiskCount”, and that the script has been added to the “Custom Files” list at the bottom left of the screen.
If you take a closer look at the output generated earlier by “/usr/sbin/MegaCli -PdGetNum -a0 -NoLog |grep Number”, you will see it contains some text and a number, divided by a : (colon).

Number of Physical Drives on Adapter 0: 6
  1. Based on the above we put a : (colon) in the Delimiter field.
  2. Click <OK> to proceed to the Columns page

On the Columns page we have to define each column that will exist in the output of our command. As we saw above, the output contains two columns, separated by a colon.

For each column in the output we have to define whether it is a key column or a data column (containing the measurement data). For a data column we can also specify the default thresholds for the warning and critical levels. Note: the suggested values for warning and critical are not typos; we will correct them at a later stage.

  1. Click <Add><New metric column> to add the first column
  2. In the Name box write Description, and do the same in the Display Name box
  3. The Column Type should be Key Column and the Value Type is String
  4. Click <OK> to save
  5. Click <Add><New metric column> to add a second column
  6. In the Name box write PhysicalDiskcount; the Display Name will be “Physical Disks”
  7. The Column Type will be Data Column with Number as Value Type
  8. The Comparison Operator should be set to <, the warning level to 1 and the critical level to 0.7
  9. Change the Alert Message to “Number available disks on raid controller is degraded to %value%”
  10. Click <OK> to save, and <Next> to proceed to the Credentials page

On the Credentials page we select which credential set should be used to measure this specific metric. Earlier we did prepare the “Host Credentials For Real-time Configuration Change Monitoring” set for this.

  1. Select the “Specify Credential Set” radio button and pick the correct credential set if this has not been selected automatically.
  2. Click <Next> to go to the Test page

The Test page offers the possibility to test the metric and check the output. The metric can be tested against all targets of the correct type if required.

  1. Click <Add> and select one (or more) targets where you want to test the metric. Click <Select>
  2. Select the target you want to test against and click <Run test>.
  3. Cloud Control will execute the test and present the results in the bottom half of the screen. If an error message is thrown, you can go back in the wizard to correct it; after correcting, return to this page and retry.
  4. If you’re happy with the test results, click <Next> to go to the review page.

As could be expected based on the name, the review page lets you once more review all settings for this metric; click <Finish> to save and close.

When the Metric Extension has been tested and saved, the next step is to save it as a “Deployable Draft”. From that point on it can no longer be modified.

  1. Select the Metric Extension
  2. Click on <Actions><Save as Deployable Draft>

Once a Metric Extension has reached the Deployable Draft status, it can be deployed to one (or more) servers to test it in real life.

  1. Select the Metric Extension
  2. Click on <Actions><Deploy to Targets…> to open the Deployment screen.
  3. Click on <Add>
  4. Select the target(s) you want to deploy to and click <Select>
  5. Click <Submit> to start deployment

At this stage the metric is deployed to our server, which means that every 15 minutes it is executed, the results are stored in the repository database and alerts can be generated. However, the metric needs some small tweaks to work properly. Remember we set the warning level to 1 and the critical level to 0.7?

    1. In Cloud Control, navigate to the homepage of the host involved.
    2. Click on <Host><Monitoring><Metric and collection settings>
    3. On this page you see an overview of the active metrics on this host. Locate the metric we just created.

Metric Extension setting

    4. Find out how many physical disks this particular host contains by executing the following command on the specific host (as root):
/usr/sbin/MegaCli -PdGetNum -a0 -NoLog |grep Number

  • I want to be notified as soon as the number of disks is lower than it should be (meaning a disk is broken or removed). In my opinion this is always a critical situation. However, Cloud Control requires that the warning and critical values are filled in and different. For this reason the warning threshold should be equal to the number of physical disks, and the critical threshold 0.5 (half a disk :-)) lower. So if you have 6 physical disks, the warning threshold is 6 and the critical threshold 5.5.

The result of this is that, as soon as one disk is gone, the value drops below the critical threshold, which should generate a critical alert.

  1. Click <OK> to continue
  2. Click <OK>
  3. Done!

From this point on, Cloud Control will monitor the PERC in your host every 15 minutes and raise an incident as soon as something is wrong. Of course, you will need to configure notifications to send alerts to your mailbox, pager or ticketing system, but I assume (and hope) that this has already been done if you are using Cloud Control.



Resolving deployment issues with Service Bus 12c – OSB-398016 – Error loading WSDL

I was completely stuck with Service Bus 12c project deployment from JDeveloper to the Service Bus run time. Every deployment met with the same fate: Conflicts found during publish – OSB-398016, Error loading the WSDL from the repository:  The WSDL is not semantically valid: Failed to read wsdl file from url due to — java.net.MalformedURLException: Unknown protocol: servicebus.

I was completely lost and frustrated – not even a simple hello_world could make it to the server.


Then, Google and Daniel Dias from Link Consulting to the rescue: http://middlewarebylink.wordpress.com/2014/07/17/soa-12c-end-to-end-e2e-tutorial-error-deploying-validatepayment/. He had run into the same problem – and he had a fix for it! Extremely hard to find if you ask me, but fairly easy to apply.

It turns out this is a known bug (18856204). The bug description refers to BPM and Service Bus being installed in the same domain.

The resolution:

Open the Administration Console for the WebLogic Domain. From the Services node, select the OSGi Frameworks service.

Click on the bac-svnserver-osgi-framework link. Note: if you run in production mode, you will now first have to create an edit session.

Add felix.service.urlhandlers=false in the Init Properties field for the configuration of this service. Then press the Save button.


If you run in Production Mode, you now have to commit the edit session.

Then, in order for this modification to make any difference, you have to restart the WebLogic (Admin) Server.

This resolved the issue for me – a weight was lifted off my shoulders. Thanks to Daniel from Link!



Emulate Cross Service Joins in SOA Suite with Table Functions and Database Adapter

(September 24, 2014 at 10:37)

It was fairly difficult to come up with a title for this article that sort of covers the content. To me it is quite clear what this is about – but how to convey that in a title? Let me explain: today in our project we discussed the implementation of a data service. The service operation under scrutiny takes a city as input and returns a list of all open orders from customers located in that city. Nothing very special there. The interesting complication lies in the fact that the customers are part of a different domain than the orders. This means – under our architecture guidelines – that we cannot create a single SQL query that joins the customers table with the orders table. A database link to join the tables across databases is out of the question, and even though these tables currently reside in the same database, such a join is not allowed. Different data domains are treated as independent entities and no direct dependencies between the two should be created. Every design has to pass the check 'will it still work if one of the domains involved were to be relocated to the cloud or be replaced by a third party application'.

The architecture is service oriented. Every domain exposes services that provide access to data and business logic. The implementation of these services and the underlying domain is encapsulated. Consumers of the domain services are unaware of the domain internals, therefore they have no dependencies on such internals and will not be affected if the internals change. For as long as the domain adheres to its service contracts, all consumers can continue to function. This even applies if the domain is moved to a different physical location or reimplemented using a COTS (commercial off the shelf) product.

So there we had it: a sound service oriented architecture with fairly strict guidelines and a clear business requirement. The composite service we were tasked with implementing would somehow have to make use of two domain services – one on the CRM domain and one on the SALES domain – to find the customers in the location specified and find all open orders for these customers.


The call to the CustomerService's findCustomers operation would return a list of customer identifier values. What to do next? Loop over all identifiers and retrieve the orders for each customer identifier – merging the results returned by all those calls to OrderService.retrieveOrdersForCustomers? Potentially making dozens or more calls to the OrderService? Or perhaps we could transfer the list of customer identifiers to the OrderService and let it take care of getting all orders for all the customers in the list. But how can we implement this in an efficient manner? How do we prevent executing the query that fetches the orders as many times as there are customer identifiers?

It turned out to be quite simple to address this challenge. Using the Oracle SQL Table Function and the database adapter we can very easily create a SQL query that joins the orders table with the list of customer identifiers. Only a single query is executed against the SALES database and a single round trip suffices to get all order records. The whole approach is of course not as efficient as doing a straight join across the two tables, but in this service oriented context it is not bad at all.

Let’s take a look at the implementation.

Starting at the end

The final result will be like this: three SOA Suite composites are used, one for each of the three services from the original design. Two of these are Domain Services; these use a database adapter to retrieve data from their Domain data store – which happens to be a relational database in both instances.


When the CustomerOrderService is invoked to return all Open Orders for Customers located in the specified location, the BPEL process in the CustomerOrderService Composite will first invoke the CustomerService that – using its database adapter against the CRM database – will return a list of customer identifiers. Next, the BPEL process will use this list of identifiers as input in its call to the OrderService. This latter service is exposed by another SCA composite that has its own database adapter invoking a PL/SQL package. In the call to this package, the entire list of identifiers is passed in. Inside the package, a SQL query is performed (that joins the customer identifiers list to the ORDERS table); this query returns the Orders (for the specified customers and with the appropriate status). The important message is that only one query is performed against the ORDERS table in the SALES database. And regardless of the number of customers involved – it will always be a single query.

Even though the service discussed in this article stretches across two data domains and although joins across these domains are not allowed in SQL queries, we still only required two queries (rather than a single query for each customer). In addition: the consumer of the operation is none the wiser about the underlying structure or whereabouts of the data. Whether the composite service enlisted Cloud based resources, a file system or a number of relational data stores: it is completely hidden from view. As it should be. The remainder of this article demonstrates exactly how this was realized.

Implementation of the CustomerService

Let's assume a very simple CRM system: a single database table called CUSTOMERS with just four columns. It is enough to serve the purpose of this article. A number of customer records are created in this table.
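
For illustration, the table could look like the sketch below; apart from the ID and the CITY column used by the service, the column names are assumptions (the original screenshot is not available).

CREATE TABLE customers
( id      NUMBER(10) PRIMARY KEY
, name    VARCHAR2(100)
, city    VARCHAR2(100)
, country VARCHAR2(100)
);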

The CustomerService is implemented using a SOA Suite Composite Application; note: we could just as well have used the Service Bus in this case.

The composite exposes the service with the agreed upon CustomerService interface (described by a WSDL and associated XSD document). A Mediator component implements that interface and maps it to a database adapter service that has been configured to query from the CUSTOMERS table all those customer records that have the required value in their CITY column.


The database adapter configuration is fairly straightforward: the adapter performs a query against table CUSTOMERS and retrieves all records with a CITY value equal to the location parameter.
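
In effect, the adapter's query boils down to something like the statement below – a hedged paraphrase, not the literal SQL the adapter generates (:location stands for the adapter's bind parameter):

SELECT ctr.id
FROM   customers ctr
WHERE  ctr.city = :location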


The Mediator maps the input and output messages specified in the CustomerService contract to the input and output required by the database adapter service based on this configuration.


After deployment, the CustomerService can easily be tested. Given a location, it will return a list of customer identifiers.


Implementation of the OrderService

The implementation of the OrderService takes place at two levels. First at the database level – where a PL/SQL package is created to produce a collection of Order objects based on a collection of Customer identifiers. This package is created, deployed and tested on its own. The second step involves the SOA Suite: a database adapter configuration is created to invoke this PL/SQL package and, using a Mediator to map a clean service interface onto this database adapter, the composite is completed, deployed and also tested.

Step one is not SOA Suite specific. The PL/SQL package that returns the collection of Order records is an example of an encapsulated service – not your typical web service, but a PL/SQL based API and implementation, which is a service just as well. The package contains a SQL query that retrieves records from the ORDERS table. The records are filtered by CTR_ID (the column that contains the customer identifier). This is done using a join to a Nested Table rather than a WHERE condition. Read on, and the what and how are revealed.

The ORDER details are returned from the package as a nested table of objects – using the very powerful yet little-known database mechanism of object types and tables of objects. The object type and the table-of-objects type are created in the database with two CREATE TYPE statements.

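The original post showed these statements as a screenshot; the sketch below is a reconstruction, and the five ORDER_T attributes are assumptions derived from the search criteria described further on:

CREATE TYPE order_t AS OBJECT
( id         NUMBER(10)
, ctr_id     NUMBER(10)     -- customer identifier
, status     VARCHAR2(20)
, amount     NUMBER(10,2)
, order_date DATE
);
/

CREATE TYPE order_table_t AS TABLE OF order_t;
/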

These statements create a type ORDER_TABLE_T – a collection of ORDER_T objects – and ORDER_T itself that describes an object with five attributes. Instances of these objects can be created – in SQL and PL/SQL – and they can be passed around between for example the database adapter and a PL/SQL package.

The package specification that the database adapter will be created against is defined as follows:

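(A reconstruction – the original was shown as an image; the package name ORDER_SERVICE_IMPL and the parameter p_customers_tbl appear elsewhere in this article, the remaining parameter names are assumptions.)

CREATE OR REPLACE PACKAGE order_service_impl
AS
  FUNCTION find_orders
  ( p_customers_tbl IN number_table_t  -- customer identifiers
  , p_status        IN VARCHAR2        -- order status to filter on
  , p_min_amount    IN NUMBER          -- lower boundary for the order amount
  , p_max_amount    IN NUMBER          -- upper boundary for the order amount
  ) RETURN order_table_t;
END order_service_impl;
/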

The function find_orders is invoked with a number_table_t – defined as CREATE TYPE NUMBER_TABLE_T AS TABLE OF NUMBER – that contains the identifiers of the customers whose orders should be retrieved. Other search criteria are the status of the orders and an upper and lower boundary for the order amount. The function returns an instance of order_table_t – which means it returns a collection of order_t objects.
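
For completeness, that collection type as a standalone statement, exactly as quoted above:

CREATE TYPE number_table_t AS TABLE OF NUMBER;
/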

The implementation of the function is fairly straightforward – if you are familiar with the use of nested table collections, the COLLECT aggregation operator and the TABLE FUNCTION operator in SQL. Note that these features were introduced in Oracle Database 8.0, 10g and 9i respectively. They have been around for a while.

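(A reconstructed sketch of the package body – the original was an image; the ORDERS column names other than CTR_ID are assumptions.)

CREATE OR REPLACE PACKAGE BODY order_service_impl
AS
  FUNCTION find_orders
  ( p_customers_tbl IN number_table_t
  , p_status        IN VARCHAR2
  , p_min_amount    IN NUMBER
  , p_max_amount    IN NUMBER
  ) RETURN order_table_t
  IS
    l_orders order_table_t;
  BEGIN
    SELECT CAST( COLLECT( order_t( ord.id
                                 , ord.ctr_id
                                 , ord.status
                                 , ord.amount
                                 , ord.order_date ) )
                 AS order_table_t )
    INTO   l_orders
    FROM   orders ord
           -- the nested table parameter behaves as if it were a real table
    JOIN   TABLE(p_customers_tbl) ctr
    ON     ord.ctr_id = ctr.column_value
    WHERE  ord.status = p_status
    AND    ord.amount BETWEEN p_min_amount AND p_max_amount;
    RETURN l_orders;
  END find_orders;
END order_service_impl;
/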

The most interesting part of the function is the FROM clause: table ORDERS is joined with something that is not a real table but behaves as one. Using the TABLE function on the nested table collection with customer identifiers (p_customers_tbl), the query behaves as if a table with customer identifiers does indeed exist (with a single column whose value we access through the pseudo-column column_value) and is joined on the CTR_ID column with the ORDERS table. Every ORDER record returned by the query is transformed into an ORDER_T instance. All these ORDER_T instances are taken together – with the COLLECT operator – and cast to the ORDER_TABLE_T type. An instance of that latter type is what we need, because it is the return type of the function.

A simple test of this function can be performed in PL/SQL.

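A sketch of such a test; the identifier values and search criteria are made up for illustration:

SET SERVEROUTPUT ON
DECLARE
  l_customers number_table_t := number_table_t(1, 2, 4);
  l_orders    order_table_t;
BEGIN
  l_orders := order_service_impl.find_orders
              ( p_customers_tbl => l_customers
              , p_status        => 'OPEN'
              , p_min_amount    => 0
              , p_max_amount    => 100000
              );
  FOR i IN 1 .. l_orders.COUNT LOOP
    dbms_output.put_line('Order '||l_orders(i).id||' for customer '||l_orders(i).ctr_id);
  END LOOP;
END;
/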

The orders found are then written to the server output through dbms_output.

With this PL/SQL package in place, the OrderService is quickly created using a database adapter. Note that the database adapter is remarkably good at dealing with Object Types and Nested Table collections.


The composite is further fleshed out – based on the predefined WSDL and XSD for the OrderService, and using a Mediator to map from that external interface to whatever the database adapter is offering.

The mapping in the transformation is fairly simple – because of the very adequate conversion performed by the database adapter from the database types ORDER_TABLE_T and ORDER_T to their counterpart XSD types.

After deployment of the composite OrderService, we can test it – with the same input as the PL/SQL test of the ORDER_SERVICE_IMPL package.

At this point both the domain services have been created and deployed. We are ready to create the composite CustomerOrderService. Note that the hardest work has been done by now, inside the PL/SQL package.

Implementation of the Composite CustomerOrderService

BPEL's core strength is orchestrating web service calls. It is the perfect tool on many occasions for creating a composite service. The CustomerOrderService is such a composite service – a service that uses multiple other services for its implementation.

This service invokes the CustomerService (in the CRM domain) as well as the OrderService (in the SALES domain). BPEL is used to implement the composite service.

WebService References are created in the composite for each of the two domain-level services that need to be invoked. A BPEL component is added and wired to these two references. The CustomerOrderService is exposed as a service and wired to the BPEL component. In goes a location and out comes a list of orders.

The BPEL process itself is easily described. Step one – the first scope – consists of invoking the CustomerService to retrieve a list of customer identifiers for all customers located at the designated location. The second step – scope number two – entails calling the OrderService with this list of identifiers in order to retrieve the sought-after order details.

A global BPEL variable is used to store the list of identifiers and carry over this list from the first to the second scope. The second scope contains two Transform activities – one to map the customer identifiers to the input variable for the OrderService and the second one to map the output from the OrderService call (the order records) to the output variable of the BPEL process.


Let us do a simple test: find all the open orders for customers located in Zoetermeer.

The message flow trace makes it clear what happened during the execution of the CustomerOrderService.

A BPEL process is instantiated. It invokes the CustomerService – that in turn makes a single call to a database adapter service (for the query against CUSTOMERS). Next the BPEL process goes on to invoke the OrderService (just a single invocation). This composite too invokes a database adapter service (for the query against the ORDERS table). This is also a single call – a single round trip from SOA Suite run time to the database.

We have retrieved records based on a join across the domains CRM and SALES – without actually creating a dependency between the two databases involved – and without sacrificing [a lot] in terms of elegance, performance and scalability.

Resources

Download the JDeveloper 11.1.1.7 (11gR1 PS6) workspace with all the sources discussed in this article: CrossServiceJoin.zip.


