Yuval Ararat

Continuous learner, eager to explore

May 09 2017

Stress Testing customisations on AEM Author with Siege

Turns out most of you want to make sure your server will deal with your complex authoring requirements and data structures. In a general sense, if you wish to test the capability of your server against its infrastructure and baseline it, you should go and run ToughDay and feel all warm and fuzzy. This is a great practice that has been tried and tested at many clients, and it has proven to reliably indicate the performance of a system.

But if you are here, you are not looking for the default best practice but for a solution to your specific need: small-scale stress testing of your code to make sure you wrote something that does not kill the server. It might be a servlet, a renderer, or anything else you wish to run on any of your environments. Well, let me show you how I do it.

Siege is a stress testing tool for hammering a single URL from bash. If you have not used it in the past, read about it here. It is old, but it is also simple, and that is its value in a KISS development process.

I truly enjoy it because it gives me insight into a few areas while I develop: a simple way of stressing a servlet I just wrote to see if I have memory leaks or ill effects on Tar growth, or just watching my server response times as I run maintenance. Over the years I have found many benefits to it in small-scale developer scenarios.

In edge cases where I know that a page is causing grief on a production environment, Siege has been quite useful: you can easily simulate the load on a different AEM instance carrying the production code and content.

It has made some tough-to-negotiate situations much easier, giving quick insight into the problem on an instance you can monitor closely through JMX and the like, all from a single command in Terminal while you watch the calls go by.

When investigating AEM Author performance degradation, I hit a few hiccups around how to pass credentials to Siege to make this work.

The most reliable way I found to get the login going was through the config file. Depending on the version you installed, it will be in a different location: my local make got a /usr/local/bin/siege.config file, whilst the ones from brew get a Cellar location with a siegerc file.

Either way, the file format is consistent across all of them if you are on the latest version (2012 vintage..).
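If you are unsure which file your install is actually reading, something like the following usually settles it. This is a quick sketch: siege.config is the helper script a source build installs to generate a personal resource file, and -C asks Siege to print the configuration it loaded.

  # generate a personal resource file in your home directory if you do not have one yet
  siege.config

  # print the configuration Siege actually loaded, to verify your settings took effect
  siege -C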

A sample is here: https://github.com/JoeDog/siege/blob/master/doc/siegerc.in

I make both of the following changes to ensure the login works; you could also make the login URL part of the URL list you want the Siege client to run against.

  • login = admin:admin
  • login-url = http://localhost:4502/libs/granite/core/content/login.html/j_security_check POST _charset_=utf-8&j_username=admin&j_password=admin&j_=true
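As a minimal sketch, the relevant part of the config file ends up looking like the block below, followed by a sample run. The instance URL and admin:admin credentials match a default local AEM Author; the servlet path is a hypothetical stand-in for whatever you are testing.

  # in your siegerc / siege.config file
  login = admin:admin
  login-url = http://localhost:4502/libs/granite/core/content/login.html/j_security_check POST _charset_=utf-8&j_username=admin&j_password=admin&j_=true

  # then hammer the (hypothetical) servlet with 10 concurrent users for two minutes
  siege -c 10 -t 2M "http://localhost:4502/bin/my-custom-servlet.json"

Watch the Tar files, memory and response times while it runs; that is where the small-scale insight comes from.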

In the configuration you can also add SSL and proxy settings when needed.

Hope this is helpful.

Written by Yuval Ararat · Categorized: Adobe, AEM, Content Management, CQ, Experience Manager, Performance · Tagged: AEM, Digital Marketing, Digital Media, Experience Manager

Dec 20 2011

Switch that CMS.

Companies wish to refresh their CMS implementations from time to time; that is natural, the evolution of the Internet disguised as a revolutionary organisational change. Companies that have not kept up with the versions of the product they are on, for various not fully understood reasons, face a big hurdle when they try to tackle a refresh five years after the implementation was completed.
Companies at that stage look for the upgrade path and find that it is not as simple as they might think: products have changed dramatically, API and development work needs to be done from scratch or with minimal reuse, and IT will need to redesign the implementation and the web server plugins.
These hurdles usually push those companies down a path of examining other products, which in turn makes the current product look really bad compared to its modern competitors.
When you compare the content editing interfaces of a few years back, you see that Internet evolution has not ceased; the new interfaces are slick, quick and easy to use.
To change your CMS you need to go through a very elaborate process: you need to remodel your data, refresh your site, and rebuild your CMS data structure and content interfaces. You will need to bring the old data across from your old CMS through some sort of migration, and while you're at it, why not add the social media integration and some internal integrations you always wished you had done.
But just like renovating a house, the problem with projects like this is the complexity and dependency between the sub-projects, and the unknown factor.
Your data migration depends on your data remodelling, and that requires your wireframes and content strategy to be designed first, delaying the migration design phase. You want to integrate with the company's internal SAP product availability, but that requires the catalogues to be migrated, and those are delayed by the wireframe and content remodelling activities.
You get the point.
Every change in one project ripples into the others, and a simple change in one project can be cumbersome for the rest.
So why not spread it around?
Start from your new CMS and build up, stage by stage: solidify one stage and put the next on top.
You can't paint the walls before you plaster them, and you can't plaster them before the frame is up.
So my advice is to set your expectations as follows: expect to have the site working before you get your data migrated, and before the integration points are working.
Make the work a step ladder and climb only when you are sure the step beneath you will hold.
But you might say that this will create problems when you identify changes that are needed in the step beneath you.
That is true; it could take more effort to fix. But there is a benefit in shedding the many layers of management otherwise needed to identify and coordinate that change earlier in the process.

Written by Yuval Ararat · Categorized: Content Management

Jun 22 2011

OpenText Website Management (reddot) social communities howto

My first howto in the OpenText world, after almost four years in the asylum. Nice.
Social Communities on Website Management offers a great assortment of features enabling you to support user-generated content.
But the standard features don't show you how to integrate a comment section under your articles.
That is partly due to the way this implementation came to be.
The core of this implementation is the Vignette Community Application, a standalone interface to forums, blogs, wikis, ideas and media spaces. This core assumes full ownership of a page, and thus is not interfaced in a way that lets you easily figure out where the components are. Its sole brother (by core, at least) is Vignette Community Services, which took the integration route rather than the standalone one: a set of components easily integrated into your environment.
Because both share the same core, Social Communities will support every call you can think of. That is the great news.
So how do you go about creating the comments region under that article of yours?
Let's start with the piping.
You will need to create HTTP connectors to the following XAPI calls:

  1. Create New Object
  2. Delete Objects

Before I start with the code, let's look at the way we will implement the creation of comments.
Assuming we want comments on CMS pages that have an ID (without one you cannot differentiate the pages), we will need to create a remote object to represent each page.
The remote object is what uniquely identifies the comments; in our case it carries the ID of the comments' parent, though it can be responsible for more.
We will start with creating a new HTTP connector for the creation of the remote object.
Create a new HTTP connector group for your site.
Click on Prepared Operations.
Then create a new operation using the star (Add a new data group) at the top left of the screen.
Give the group a name – “remoteobject.create”
Add the URL postfix – “CreateNewObject”
Method should be “Post”

Go to the Request Parameters and add the following.

  • extObjType
  • extObjRealm
  • extObjSystemType
  • extObjSystemID
  • extObjContext
  • extObjID
  • name
  • type

Do the same for the delete operation:
Give the group a name – “remoteobject.delete”
Add the URL postfix – “DeleteObjects”
Method should be “Post”
In this case you only need to pass the objectID (x.x.x) of the object to be deleted.
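To make the piping concrete, here is a hedged sketch of the raw HTTP calls these two prepared operations stand for. The base URL and all parameter values are assumptions; substitute your Community Application endpoint and your page's real identifiers.

  # create a remote object representing CMS page 12345 (all values are placeholders)
  curl -X POST "http://communities.example.com/xapi/CreateNewObject" \
    --data "extObjType=page" \
    --data "extObjRealm=myrealm" \
    --data "extObjSystemType=cms" \
    --data "extObjSystemID=website-management" \
    --data "extObjContext=articles" \
    --data "extObjID=12345" \
    --data "name=comments-for-12345" \
    --data "type=remoteObject"

  # delete it later by pointing at its objectID (the x.x.x identifier)
  curl -X POST "http://communities.example.com/xapi/DeleteObjects" \
    --data "objectID=x.x.x"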

Now we have the ability to create the basic item that is capable of holding comments, ratings and so on.
This method will let you expand later into creating comments and ratings on the remote object.
The best place to figure out the required parameters is the developer guide for the Vignette Community Application and the XML API documentation.

Written by Yuval Ararat · Categorized: Content Management, Enterprise 2.0, OpenText

May 20 2011

Great Intranets

After a day in ibf24 from the IBF, I was chugging along until Jonathan Phillips contacted me through Twitter. We had a nice discussion about the implementation of intranets: is the budget the main factor in determining whether the project will be a success, or are there more factors?

My take was that it is not just budget; although budget does set the tone and can influence the size, it is still not the deciding factor. You can do amazing things on the smallest budget if you keep focused on the goals and implement them rigorously. As an example, take the WWF intranet, which is a combination of Google Apps and a CMS.

During the 24 hours, of which I partook in only a few (10), we were exposed to many organisations' intranets; it was like having a door open into the heart of other organisations to see how they do things. The good part was that you got to see some shoestring operations with amazing implementations, when it comes to adapting the intranet to its users, alongside some major brands with intranets that seem inactive or lonely.

During our discussion, Jonathan also pointed me to his blog post describing the characteristics of a successful intranet and asked me to respond.

This is my response to Jonathan’s post.

I will start with the definition of "Great": I believe it lacks context and thus encompasses both cultural and technological things.
"Important and significant" is only a valid point if the intranet is doing its job of delivering content in a manner that is useful and engaging. When that happens you get a site that is important and significant to the company; it is only important if users use it, and that is a product, not a goal.

The same logic applies to "wonderful", "first rate", "very good", "remarkable" and "consequential": none of these is something you can target when implementing a service, nor when maintaining it. It could be "of extraordinary powers", "very admirable", "unusual and considerable in degree, power, intensity". These things can be planned, but they usually cost much if the system we are replacing is already great and likeable.

For reference, I will list the characteristics Jonathan pointed out:

  1. An open, multi-way communication vehicle: Top Down, Bottom Up, Peer-to-Peer
  2. A facilitator of enterprise collaboration
  3. An executor of business transactions
  4. A tool that positively impacts every job in your company
  5. A gateway to business knowledge
  6. A digital reflection of the values of the company
  7. Serves to build enterprise community
  8. Transparent governance, management and strategy
  9. An engaging space
  10. Available where your employees need it
I agree with all of the other items; they are the cornerstones of the intranet in my opinion. But the one thing I want to talk about here is how you achieve this.

There is an illusion that all these characteristics relate to a single entity, and thus translate into a single product to solve the problem.

This is fine if you have a very limited team with non-diverse needs. If that is the case, you can probably suffice with a good WordPress implementation and be done with it.

Most cases are not this easy and require a more complex environment to facilitate the users' needs.

The question is how we assemble this ménage of solutions. Do we turn to an all-encompassing solution that has the potential to flop and make the whole intranet look like a joke? Or do we assemble it from separate products?
Who makes the decision on which product to implement, and how do we know which one is best for our users?
In my experience, the implementations I have found most successful were experiments in their youth: they were born from the need of a certain group and then spread across the organisation.
I also like to look at the economy of products in the organisation: much like startups, some products in the intranet get a lot of traction and some don't. This economic environment lets you choose the solutions that match the crowd.
As opposed to core solutions, which exist for a predefined business process, our intranet is a service that helps the users get the core business processes done more effectively.
Since these are not mandatory systems, and they are there to support the processes, we have the privilege of experimenting and failing. The experiments should be like little startups in the intranet: if they get to pitch and show value they stay; if they don't, they go.
The merit of a solution should be judged on the ratio of value to the user over cost: if it is greater than 1 we are winning; if it is smaller we are losing. For example, a tool that saves its users two hours a week while costing the equivalent of one is comfortably above 1. In my personal opinion, 1 is a fine equilibrium for some applications.
The process I am suggesting is this:

  1. Check what the groups are using today and figure out whether they are pleased with it; there could be some wikis and other tools lurking in the groups.
  2. Let the groups experiment with the tools on the market and choose the ones to be tested.
  3. Put analytics tools on the solutions under test to capture usage, and let the users start working with them.
  4. Check after a period of time which tools were used most.
  5. Check how they helped and whether they stand the merit of exposure to the whole intranet.
  6. If they seem like a good candidate to solve an unsolved problem in the organisation, merge them into the intranet.
  7. Check the value, rinse and repeat.
This way, like Lego blocks, you will pick the tools that match your people instead of forcing them to use the technology that looked cool in the sales pitch.
If the tools are SaaS, like Yammer, then use them yourself and try to get people to email you their success stories with those tools.

On another note, this method carries a problem that will be present whenever we don't go with the single-entity approach: it lacks integration between the solutions. It falls to the developers to tailor the integration and find a solution for interoperability where no standard is available.
This will be the biggest hurdle, but it is still not as big as picking the wrong software for your users, and some of the SaaS offerings provide a great solution here as they integrate easily into dashboards and websites.

Oh, just something from the ibf24 Twitter feed: "intranet in 3 months" is not a valid answer. Three months for the WCM might be OK, but development of the product does not stop there.

Written by Yuval Ararat · Categorized: Content Management, Enterprise 2.0

May 17 2011

Content Staging and Virtual Staging

I was just reading, over at Gadgetopia, the post by Deane Barker about content staging and the merits of virtual staging of content. I was impressed by Deane's exposure of the disadvantages of the virtual staging methodology, especially when the need arises to change the menu structure.
When I come to think about an architecture like this, I think there is a balance point that is better than either virtual or actual staging. Since we have to add other parameters to the equation, such as redundancy, stability and productivity, my take on staging is a bit more complex.
A decoupled CMS/WCM is the term you might use for the architecture, but the nature of the implementation does not require the software to directly support it.
I will start with a diagram of the architecture and then dive into the water.
[Diagram: Publishing CMS architecture]
As you can see, I have quite a strong opinion in regard to content staging and publishing. 🙂
The process I see is a decoupled CMS taking a hybrid approach: a single server with multiple endpoints. Content generation is done on the internal-facing content management server and staged through the application and web server, so that when a workflow informs the editor of an upcoming change or new content, the creator can examine his work in context; I am a supporter of in-context editing.
This is not the pure decoupled system with many-to-many relationships, as I have not yet seen a successful implementation of such a system.
But now that we have separated content editing from the live content, we gain some ease of use and the ability to do anything we want on the content editing site and later publish it to the website.
The publishing method can be any of a number of things: it could be DB replication working over the same SAN/NAS, or it could be file sync and DB injections. Any method that is stable and well maintained is a good method, as sketched below.
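As a minimal sketch of the file-sync flavour (the host and paths are placeholders, and a real setup would run this as the workflow's publish step):

  # sync the approved, rendered content from the authoring server's export
  # area to the live web server, removing files the editors have deleted
  rsync -az --delete /var/cms/staging-export/ webserver:/var/www/live/

  # any DB-side content moves in a separate, equally scripted step
  # (replication or injection) so that publishing stays repeatable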
The DR of the system has much more to offer, as the environments are unconnected and can be replicated in hot-cold or hot-hot scenarios; the ability to push the content to several data centres also comes naturally to a system like this.
As for the direction of content and code, or the backward-forward dance of "Code moves forward. Content moves backward." (a blog post by Seth Gottlieb): in this scenario the whole server farm is our production, and the content moves from here down the glide path.
The separation rests on the ability to split the core from the content editors' UI, letting the application interact with the API (core) of the product away from the content editors' server. In some products this separation exists naturally and you will not need to manipulate the product; in other cases you will have to create that separation in a more complex way, but it is possible in most products.

Written by Yuval Ararat · Categorized: Content Management, Software Architecture
